00:00:00.000 Started by upstream project "autotest-per-patch" build number 120918 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.152 Fetching changes from the remote Git repository 00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.279 > git --version # 'git version 2.39.2' 00:00:00.279 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.279 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.279 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.586 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.604 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.617 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:05.617 > git config core.sparsecheckout # timeout=10 00:00:05.630 > git read-tree -mu HEAD # timeout=10 00:00:05.647 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5 00:00:05.667 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:05.668 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:00:05.787 [Pipeline] Start of Pipeline 00:00:05.805 [Pipeline] library 00:00:05.807 Loading library shm_lib@master 00:00:05.808 Library shm_lib@master is cached. Copying from home. 00:00:05.829 [Pipeline] node 00:00:20.832 Still waiting to schedule task 00:00:20.832 Waiting for next available executor on ‘vagrant-vm-host’ 00:08:46.969 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:08:46.970 [Pipeline] { 00:08:46.984 [Pipeline] catchError 00:08:46.986 [Pipeline] { 00:08:47.003 [Pipeline] wrap 00:08:47.013 [Pipeline] { 00:08:47.023 [Pipeline] stage 00:08:47.025 [Pipeline] { (Prologue) 00:08:47.047 [Pipeline] echo 00:08:47.049 Node: VM-host-SM4 00:08:47.056 [Pipeline] cleanWs 00:08:47.066 [WS-CLEANUP] Deleting project workspace... 00:08:47.066 [WS-CLEANUP] Deferred wipeout is used... 
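The jbp checkout above is a shallow, pinned fetch. A minimal standalone sketch of the same steps, with the URL and commit SHA copied from the log and the target directory purely illustrative:

git init jbp && cd jbp
git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master   # depth=1, as above
git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338    # the FETCH_HEAD commit reported above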
00:08:47.072 [WS-CLEANUP] done 00:08:47.233 [Pipeline] setCustomBuildProperty 00:08:47.305 [Pipeline] nodesByLabel 00:08:47.310 Could not find any nodes with 'sorcerer' label 00:08:47.316 [Pipeline] retry 00:08:47.318 [Pipeline] { 00:08:47.339 [Pipeline] checkout 00:08:47.346 The recommended git tool is: git 00:08:47.356 using credential 00000000-0000-0000-0000-000000000002 00:08:47.361 Cloning the remote Git repository 00:08:47.363 Honoring refspec on initial clone 00:08:47.363 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:08:47.364 > git init /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp # timeout=10 00:08:47.373 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:08:47.373 > git --version # timeout=10 00:08:47.377 > git --version # 'git version 2.25.1' 00:08:47.377 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:08:47.377 Setting http proxy: proxy-dmz.intel.com:911 00:08:47.377 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:08:51.550 Avoid second fetch 00:08:51.568 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:08:51.532 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:08:51.536 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:08:51.550 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:08:51.560 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:08:51.567 > git config core.sparsecheckout # timeout=10 00:08:51.573 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:08:51.677 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:08:51.686 [Pipeline] } 00:08:51.706 [Pipeline] // retry 00:08:51.717 [Pipeline] nodesByLabel 00:08:51.719 Could not find any nodes with 'sorcerer' label 00:08:51.725 [Pipeline] retry 00:08:51.727 [Pipeline] { 00:08:51.748 [Pipeline] checkout 00:08:51.755 The recommended git tool is: NONE 00:08:51.765 using credential 00000000-0000-0000-0000-000000000002 00:08:51.770 Cloning the remote Git repository 00:08:51.773 Honoring refspec on initial clone 00:08:51.773 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:08:51.774 > git init /var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk # timeout=10 00:08:51.784 Using reference repository: /var/ci_repos/spdk_multi 00:08:51.784 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:08:51.784 > git --version # timeout=10 00:08:51.788 > git --version # 'git version 2.25.1' 00:08:51.788 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:08:51.788 Setting http proxy: proxy-dmz.intel.com:911 00:08:51.788 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/51/22651/8 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:09:00.724 Avoid second fetch 00:09:00.764 Checking out Revision 3f3de12cc7e937fc54fc700678cc0d4709fbc6ae (FETCH_HEAD) 00:09:01.058 Commit message: "bdev: register and use trace owners" 00:09:00.658 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:09:00.663 > git config --add remote.origin.fetch refs/changes/51/22651/8 # timeout=10 00:09:00.669 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:09:00.724 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 
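The spdk checkout above, together with the submodule and repack entries that follow, relies on a local reference repository plus a Gerrit change ref. A minimal sketch of the equivalent standalone commands, with paths and refs copied from the log and the plugin's init/config/fetch sequence collapsed into a plain clone:

git clone --reference /var/ci_repos/spdk_multi https://review.spdk.io/gerrit/a/spdk/spdk spdk
cd spdk
git fetch origin refs/changes/51/22651/8                    # the patch set under test
git checkout -f FETCH_HEAD                                  # 3f3de12cc in this run
git submodule update --init --recursive --reference /var/ci_repos/spdk_multi
# Later entries detach the workspace from the reference repo so it is self-contained:
git repack -a -d --threads="$(nproc)"
git submodule foreach git repack -a -d --threads="$(nproc)"
find .git -type f -name alternates -print -delete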
00:09:00.757 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:09:00.764 > git config core.sparsecheckout # timeout=10 00:09:00.768 > git checkout -f 3f3de12cc7e937fc54fc700678cc0d4709fbc6ae # timeout=10 00:09:01.058 > git rev-list --no-walk bf2cbb6d8543df261aa0f405bc05f6ba2f1c608a # timeout=10 00:09:01.088 > git remote # timeout=10 00:09:01.092 > git submodule init # timeout=10 00:09:01.156 > git submodule sync # timeout=10 00:09:01.217 > git config --get remote.origin.url # timeout=10 00:09:01.225 > git submodule init # timeout=10 00:09:01.284 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:09:01.289 > git config --get submodule.dpdk.url # timeout=10 00:09:01.294 > git remote # timeout=10 00:09:01.301 > git config --get remote.origin.url # timeout=10 00:09:01.305 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:09:01.309 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:09:01.313 > git remote # timeout=10 00:09:01.319 > git config --get remote.origin.url # timeout=10 00:09:01.323 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:09:01.327 > git config --get submodule.isa-l.url # timeout=10 00:09:01.332 > git remote # timeout=10 00:09:01.337 > git config --get remote.origin.url # timeout=10 00:09:01.343 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:09:01.347 > git config --get submodule.ocf.url # timeout=10 00:09:01.352 > git remote # timeout=10 00:09:01.356 > git config --get remote.origin.url # timeout=10 00:09:01.361 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:09:01.364 > git config --get submodule.libvfio-user.url # timeout=10 00:09:01.368 > git remote # timeout=10 00:09:01.374 > git config --get remote.origin.url # timeout=10 00:09:01.377 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:09:01.381 > git config --get submodule.xnvme.url # timeout=10 00:09:01.385 > git remote # timeout=10 00:09:01.389 > git config --get remote.origin.url # timeout=10 00:09:01.393 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:09:01.397 > git config --get submodule.isa-l-crypto.url # timeout=10 00:09:01.401 > git remote # timeout=10 00:09:01.406 > git config --get remote.origin.url # timeout=10 00:09:01.411 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:09:01.416 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.416 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.416 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.417 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.417 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.417 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.417 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:09:01.417 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.417 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.417 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:09:01.417 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.417 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:09:01.417 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:09:01.417 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.417 > git submodule update --init --recursive 
--reference /var/ci_repos/spdk_multi ocf # timeout=10 00:09:01.417 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.417 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:09:01.418 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.418 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:09:01.418 Setting http proxy: proxy-dmz.intel.com:911 00:09:01.418 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:09:32.107 [Pipeline] dir 00:09:32.107 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk 00:09:32.108 [Pipeline] { 00:09:32.118 [Pipeline] sh 00:09:32.390 ++ nproc 00:09:32.390 + threads=88 00:09:32.390 + git repack -a -d --threads=88 00:09:37.651 + git submodule foreach git repack -a -d --threads=88 00:09:37.651 Entering 'dpdk' 00:09:42.917 Entering 'intel-ipsec-mb' 00:09:42.917 Entering 'isa-l' 00:09:42.917 Entering 'isa-l-crypto' 00:09:42.917 Entering 'libvfio-user' 00:09:43.175 Entering 'ocf' 00:09:43.432 Entering 'xnvme' 00:09:43.690 + find .git -type f -name alternates -print -delete 00:09:43.690 .git/objects/info/alternates 00:09:43.690 .git/modules/dpdk/objects/info/alternates 00:09:43.690 .git/modules/ocf/objects/info/alternates 00:09:43.690 .git/modules/isa-l/objects/info/alternates 00:09:43.690 .git/modules/xnvme/objects/info/alternates 00:09:43.690 .git/modules/libvfio-user/objects/info/alternates 00:09:43.690 .git/modules/isa-l-crypto/objects/info/alternates 00:09:43.690 .git/modules/intel-ipsec-mb/objects/info/alternates 00:09:43.704 [Pipeline] } 00:09:43.727 [Pipeline] // dir 00:09:43.733 [Pipeline] } 00:09:43.755 [Pipeline] // retry 00:09:43.763 [Pipeline] sh 00:09:44.042 + git -C spdk log --oneline -n5 00:09:44.042 3f3de12cc bdev: register and use trace owners 00:09:44.042 dd92a7e9b nvmf/tcp: register and use trace owners 00:09:44.042 ce736be4b nvmf/tcp: add nvmf_qpair_set_ctrlr helper function 00:09:44.042 7d846d5fa app/trace: emit owner descriptions 00:09:44.042 3173df1bf trace: rename trace_event's poller_id to owner_id 00:09:44.063 [Pipeline] writeFile 00:09:44.082 [Pipeline] sh 00:09:44.361 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:09:44.373 [Pipeline] sh 00:09:44.678 + cat autorun-spdk.conf 00:09:44.678 SPDK_TEST_UNITTEST=1 00:09:44.678 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:44.678 SPDK_TEST_NVME=1 00:09:44.678 SPDK_TEST_BLOCKDEV=1 00:09:44.678 SPDK_RUN_ASAN=1 00:09:44.678 SPDK_RUN_UBSAN=1 00:09:44.678 SPDK_TEST_RAID5=1 00:09:44.678 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:44.685 RUN_NIGHTLY=0 00:09:44.687 [Pipeline] } 00:09:44.702 [Pipeline] // stage 00:09:44.715 [Pipeline] stage 00:09:44.717 [Pipeline] { (Run VM) 00:09:44.731 [Pipeline] sh 00:09:45.010 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:09:45.010 + echo 'Start stage prepare_nvme.sh' 00:09:45.010 Start stage prepare_nvme.sh 00:09:45.010 + [[ -n 5 ]] 00:09:45.010 + disk_prefix=ex5 00:09:45.010 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_2 ]] 00:09:45.010 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf ]] 00:09:45.010 + source /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf 00:09:45.010 ++ SPDK_TEST_UNITTEST=1 00:09:45.010 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:45.010 ++ SPDK_TEST_NVME=1 00:09:45.010 ++ SPDK_TEST_BLOCKDEV=1 00:09:45.010 ++ SPDK_RUN_ASAN=1 00:09:45.010 ++ SPDK_RUN_UBSAN=1 00:09:45.010 ++ SPDK_TEST_RAID5=1 00:09:45.010 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:45.010 ++ RUN_NIGHTLY=0 00:09:45.010 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:09:45.010 + nvme_files=() 00:09:45.010 + declare -A nvme_files 00:09:45.010 + backend_dir=/var/lib/libvirt/images/backends 00:09:45.010 + nvme_files['nvme.img']=5G 00:09:45.010 + nvme_files['nvme-cmb.img']=5G 00:09:45.010 + nvme_files['nvme-multi0.img']=4G 00:09:45.010 + nvme_files['nvme-multi1.img']=4G 00:09:45.010 + nvme_files['nvme-multi2.img']=4G 00:09:45.010 + nvme_files['nvme-openstack.img']=8G 00:09:45.010 + nvme_files['nvme-zns.img']=5G 00:09:45.010 + (( SPDK_TEST_NVME_PMR == 1 )) 00:09:45.010 + (( SPDK_TEST_FTL == 1 )) 00:09:45.010 + (( SPDK_TEST_NVME_FDP == 1 )) 00:09:45.010 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:09:45.010 + for nvme in "${!nvme_files[@]}" 00:09:45.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:09:45.010 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:09:45.010 + for nvme in "${!nvme_files[@]}" 00:09:45.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:09:45.010 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:09:45.010 + for nvme in "${!nvme_files[@]}" 00:09:45.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:09:45.268 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:09:45.268 + for nvme in "${!nvme_files[@]}" 00:09:45.268 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:09:45.268 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:09:45.268 + for nvme in "${!nvme_files[@]}" 00:09:45.268 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:09:45.526 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:09:45.526 + for nvme in "${!nvme_files[@]}" 00:09:45.526 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:09:45.526 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:09:45.526 + for nvme in "${!nvme_files[@]}" 00:09:45.526 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:09:46.900 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:09:46.900 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:09:46.900 + echo 'End stage prepare_nvme.sh' 00:09:46.900 End stage prepare_nvme.sh 00:09:46.912 [Pipeline] sh 00:09:47.280 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:09:47.280 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f ubuntu2204 00:09:47.280 00:09:47.280 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant 00:09:47.280 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk 00:09:47.280 
VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_2 00:09:47.280 HELP=0 00:09:47.280 DRY_RUN=0 00:09:47.280 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img, 00:09:47.280 NVME_DISKS_TYPE=nvme, 00:09:47.280 NVME_AUTO_CREATE=0 00:09:47.280 NVME_DISKS_NAMESPACES=, 00:09:47.280 NVME_CMB=, 00:09:47.280 NVME_PMR=, 00:09:47.280 NVME_ZNS=, 00:09:47.280 NVME_MS=, 00:09:47.280 NVME_FDP=, 00:09:47.280 SPDK_VAGRANT_DISTRO=ubuntu2204 00:09:47.280 SPDK_VAGRANT_VMCPU=10 00:09:47.280 SPDK_VAGRANT_VMRAM=12288 00:09:47.280 SPDK_VAGRANT_PROVIDER=libvirt 00:09:47.280 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:09:47.280 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:09:47.280 SPDK_OPENSTACK_NETWORK=0 00:09:47.280 VAGRANT_PACKAGE_BOX=0 00:09:47.280 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:09:47.280 FORCE_DISTRO=true 00:09:47.280 VAGRANT_BOX_VERSION= 00:09:47.280 EXTRA_VAGRANTFILES= 00:09:47.280 NIC_MODEL=e1000 00:09:47.280 00:09:47.280 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt' 00:09:47.280 /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:09:50.563 Bringing machine 'default' up with 'libvirt' provider... 00:09:51.498 ==> default: Creating image (snapshot of base box volume). 00:09:51.757 ==> default: Creating domain with the following settings... 00:09:51.757 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1713922791_aa2486753adca1e1d98e 00:09:51.757 ==> default: -- Domain type: kvm 00:09:51.757 ==> default: -- Cpus: 10 00:09:51.757 ==> default: -- Feature: acpi 00:09:51.757 ==> default: -- Feature: apic 00:09:51.757 ==> default: -- Feature: pae 00:09:51.757 ==> default: -- Memory: 12288M 00:09:51.757 ==> default: -- Memory Backing: hugepages: 00:09:51.757 ==> default: -- Management MAC: 00:09:51.757 ==> default: -- Loader: 00:09:51.757 ==> default: -- Nvram: 00:09:51.757 ==> default: -- Base box: spdk/ubuntu2204 00:09:51.757 ==> default: -- Storage pool: default 00:09:51.757 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1713922791_aa2486753adca1e1d98e.img (20G) 00:09:51.757 ==> default: -- Volume Cache: default 00:09:51.757 ==> default: -- Kernel: 00:09:51.757 ==> default: -- Initrd: 00:09:51.757 ==> default: -- Graphics Type: vnc 00:09:51.757 ==> default: -- Graphics Port: -1 00:09:51.757 ==> default: -- Graphics IP: 127.0.0.1 00:09:51.757 ==> default: -- Graphics Password: Not defined 00:09:51.757 ==> default: -- Video Type: cirrus 00:09:51.757 ==> default: -- Video VRAM: 9216 00:09:51.757 ==> default: -- Sound Type: 00:09:51.757 ==> default: -- Keymap: en-us 00:09:51.757 ==> default: -- TPM Path: 00:09:51.757 ==> default: -- INPUT: type=mouse, bus=ps2 00:09:51.757 ==> default: -- Command line args: 00:09:51.757 ==> default: -> value=-device, 00:09:51.757 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:09:51.757 ==> default: -> value=-drive, 00:09:51.757 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:09:51.757 ==> default: -> value=-device, 00:09:51.757 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:51.757 ==> default: Creating shared folders metadata... 00:09:51.757 ==> default: Starting domain. 
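For reference, the NVMe-related portion of the QEMU command line that the domain above boots with, consolidated from the "-> value=" entries; the rest of the machine definition (CPUs, memory, boot disk, NICs) is generated by vagrant-libvirt and omitted here, so this is a sketch rather than a runnable invocation:

qemu-system-x86_64 \
  ... \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096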
00:09:53.660 ==> default: Waiting for domain to get an IP address... 00:10:05.873 ==> default: Waiting for SSH to become available... 00:10:07.296 ==> default: Configuring and enabling network interfaces... 00:10:12.561 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:10:17.830 ==> default: Mounting SSHFS shared folder... 00:10:18.763 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:10:18.763 ==> default: Checking Mount.. 00:10:19.698 ==> default: Folder Successfully Mounted! 00:10:19.698 ==> default: Running provisioner: file... 00:10:19.956 default: ~/.gitconfig => .gitconfig 00:10:20.524 00:10:20.524 SUCCESS! 00:10:20.524 00:10:20.524 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:10:20.524 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:10:20.524 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt" to destroy all trace of vm. 00:10:20.524 00:10:20.533 [Pipeline] } 00:10:20.551 [Pipeline] // stage 00:10:20.561 [Pipeline] dir 00:10:20.561 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt 00:10:20.563 [Pipeline] { 00:10:20.576 [Pipeline] catchError 00:10:20.578 [Pipeline] { 00:10:20.592 [Pipeline] sh 00:10:20.870 + vagrant ssh-config --host vagrant 00:10:20.870 + sed -ne /^Host/,$p 00:10:20.870 + tee ssh_conf 00:10:25.064 Host vagrant 00:10:25.064 HostName 192.168.121.223 00:10:25.064 User vagrant 00:10:25.064 Port 22 00:10:25.064 UserKnownHostsFile /dev/null 00:10:25.064 StrictHostKeyChecking no 00:10:25.064 PasswordAuthentication no 00:10:25.064 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:10:25.064 IdentitiesOnly yes 00:10:25.064 LogLevel FATAL 00:10:25.064 ForwardAgent yes 00:10:25.064 ForwardX11 yes 00:10:25.064 00:10:25.076 [Pipeline] withEnv 00:10:25.079 [Pipeline] { 00:10:25.095 [Pipeline] sh 00:10:25.376 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:10:25.376 source /etc/os-release 00:10:25.377 [[ -e /image.version ]] && img=$(< /image.version) 00:10:25.377 # Minimal, systemd-like check. 00:10:25.377 if [[ -e /.dockerenv ]]; then 00:10:25.377 # Clear garbage from the node's name: 00:10:25.377 # agt-er_autotest_547-896 -> autotest_547-896 00:10:25.377 # $HOSTNAME is the actual container id 00:10:25.377 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:10:25.377 if mountpoint -q /etc/hostname; then 00:10:25.377 # We can assume this is a mount from a host where container is running, 00:10:25.377 # so fetch its hostname to easily identify the target swarm worker. 
00:10:25.377 container="$(< /etc/hostname) ($agent)" 00:10:25.377 else 00:10:25.377 # Fallback 00:10:25.377 container=$agent 00:10:25.377 fi 00:10:25.377 fi 00:10:25.377 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:10:25.377 00:10:25.646 [Pipeline] } 00:10:25.666 [Pipeline] // withEnv 00:10:25.676 [Pipeline] setCustomBuildProperty 00:10:25.720 [Pipeline] stage 00:10:25.727 [Pipeline] { (Tests) 00:10:25.765 [Pipeline] sh 00:10:26.050 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:10:26.322 [Pipeline] timeout 00:10:26.322 Timeout set to expire in 1 hr 0 min 00:10:26.323 [Pipeline] { 00:10:26.342 [Pipeline] sh 00:10:26.689 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:10:27.256 HEAD is now at 3f3de12cc bdev: register and use trace owners 00:10:27.270 [Pipeline] sh 00:10:27.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:10:27.825 [Pipeline] sh 00:10:28.105 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:10:28.379 [Pipeline] sh 00:10:28.659 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:10:28.917 ++ readlink -f spdk_repo 00:10:28.917 + DIR_ROOT=/home/vagrant/spdk_repo 00:10:28.917 + [[ -n /home/vagrant/spdk_repo ]] 00:10:28.917 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:10:28.917 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:10:28.917 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:10:28.917 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:10:28.917 + [[ -d /home/vagrant/spdk_repo/output ]] 00:10:28.917 + cd /home/vagrant/spdk_repo 00:10:28.917 + source /etc/os-release 00:10:28.917 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:10:28.917 ++ NAME=Ubuntu 00:10:28.917 ++ VERSION_ID=22.04 00:10:28.917 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:10:28.917 ++ VERSION_CODENAME=jammy 00:10:28.917 ++ ID=ubuntu 00:10:28.917 ++ ID_LIKE=debian 00:10:28.917 ++ HOME_URL=https://www.ubuntu.com/ 00:10:28.917 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:10:28.917 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:10:28.917 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:10:28.917 ++ UBUNTU_CODENAME=jammy 00:10:28.917 + uname -a 00:10:28.917 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:10:28.917 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:29.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:10:29.176 Hugepages 00:10:29.176 node hugesize free / total 00:10:29.176 node0 1048576kB 0 / 0 00:10:29.176 node0 2048kB 0 / 0 00:10:29.176 00:10:29.176 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:29.176 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:29.176 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:29.176 + rm -f /tmp/spdk-ld-path 00:10:29.436 + source autorun-spdk.conf 00:10:29.436 ++ SPDK_TEST_UNITTEST=1 00:10:29.436 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:29.436 ++ SPDK_TEST_NVME=1 00:10:29.436 ++ SPDK_TEST_BLOCKDEV=1 00:10:29.436 ++ SPDK_RUN_ASAN=1 00:10:29.436 ++ SPDK_RUN_UBSAN=1 00:10:29.436 ++ SPDK_TEST_RAID5=1 00:10:29.436 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:29.436 ++ RUN_NIGHTLY=0 00:10:29.436 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 
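The test configuration sourced above is a plain shell fragment of KEY=value pairs. A minimal sketch of reproducing this step by hand inside the VM, with the contents copied from the autorun-spdk.conf printed earlier and the invocation matching the entries that follow:

# Write the same config the job generated, then hand its path to autorun.sh.
cat > /home/vagrant/spdk_repo/autorun-spdk.conf <<'EOF'
SPDK_TEST_UNITTEST=1
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVME=1
SPDK_TEST_BLOCKDEV=1
SPDK_RUN_ASAN=1
SPDK_RUN_UBSAN=1
SPDK_TEST_RAID5=1
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
RUN_NIGHTLY=0
EOF
cd /home/vagrant/spdk_repo && spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf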
00:10:29.436 + [[ -n '' ]] 00:10:29.436 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:10:29.436 + for M in /var/spdk/build-*-manifest.txt 00:10:29.436 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:10:29.436 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:29.436 + for M in /var/spdk/build-*-manifest.txt 00:10:29.436 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:10:29.436 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:29.436 ++ uname 00:10:29.436 + [[ Linux == \L\i\n\u\x ]] 00:10:29.436 + sudo dmesg -T 00:10:29.436 + sudo dmesg --clear 00:10:29.436 + dmesg_pid=2100 00:10:29.436 + sudo dmesg -Tw 00:10:29.436 + [[ Ubuntu == FreeBSD ]] 00:10:29.436 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:29.436 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:29.436 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:10:29.436 + [[ -x /usr/src/fio-static/fio ]] 00:10:29.436 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:10:29.436 + [[ ! -v VFIO_QEMU_BIN ]] 00:10:29.436 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:10:29.436 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:10:29.436 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:10:29.436 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:10:29.436 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:10:29.436 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:29.436 Test configuration: 00:10:29.436 SPDK_TEST_UNITTEST=1 00:10:29.436 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:29.436 SPDK_TEST_NVME=1 00:10:29.436 SPDK_TEST_BLOCKDEV=1 00:10:29.436 SPDK_RUN_ASAN=1 00:10:29.436 SPDK_RUN_UBSAN=1 00:10:29.436 SPDK_TEST_RAID5=1 00:10:29.436 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:29.436 RUN_NIGHTLY=0 01:40:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.436 01:40:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:29.436 01:40:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.436 01:40:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.436 01:40:28 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:29.436 01:40:28 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:29.436 01:40:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:29.436 01:40:28 -- paths/export.sh@5 -- $ export PATH 00:10:29.436 01:40:28 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:29.436 01:40:28 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:10:29.436 01:40:28 -- common/autobuild_common.sh@435 -- $ date +%s 00:10:29.436 01:40:28 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713922828.XXXXXX 00:10:29.436 01:40:28 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713922828.UGIk0V 00:10:29.436 01:40:28 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:10:29.436 01:40:28 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:10:29.436 01:40:28 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:10:29.436 01:40:28 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:10:29.436 01:40:28 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:10:29.436 01:40:28 -- common/autobuild_common.sh@451 -- $ get_config_params 00:10:29.436 01:40:28 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:10:29.436 01:40:28 -- common/autotest_common.sh@10 -- $ set +x 00:10:29.436 01:40:28 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:10:29.436 01:40:28 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:10:29.436 01:40:28 -- pm/common@17 -- $ local monitor 00:10:29.436 01:40:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:29.436 01:40:28 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2136 00:10:29.436 01:40:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:29.436 01:40:28 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2138 00:10:29.436 01:40:28 -- pm/common@21 -- $ date +%s 00:10:29.436 01:40:28 -- pm/common@26 -- $ sleep 1 00:10:29.436 01:40:28 -- pm/common@21 -- $ date +%s 00:10:29.436 01:40:28 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713922828 00:10:29.436 01:40:28 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713922828 00:10:29.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713922828_collect-vmstat.pm.log 00:10:29.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713922828_collect-cpu-load.pm.log 00:10:30.639 01:40:29 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:10:30.639 01:40:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:10:30.639 01:40:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:10:30.639 01:40:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:10:30.639 01:40:29 -- spdk/autobuild.sh@16 -- $ date -u 00:10:30.639 Wed Apr 24 01:40:29 UTC 2024 00:10:30.639 01:40:29 -- 
spdk/autobuild.sh@17 -- $ git describe --tags 00:10:30.639 v24.05-pre-445-g3f3de12cc 00:10:30.639 01:40:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:10:30.639 01:40:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:10:30.639 01:40:29 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:10:30.639 01:40:29 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:10:30.639 01:40:29 -- common/autotest_common.sh@10 -- $ set +x 00:10:30.639 ************************************ 00:10:30.639 START TEST asan 00:10:30.639 ************************************ 00:10:30.639 using asan 00:10:30.639 01:40:29 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:10:30.639 00:10:30.639 real 0m0.000s 00:10:30.639 user 0m0.000s 00:10:30.639 sys 0m0.000s 00:10:30.639 01:40:29 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:30.639 01:40:29 -- common/autotest_common.sh@10 -- $ set +x 00:10:30.639 ************************************ 00:10:30.639 END TEST asan 00:10:30.639 ************************************ 00:10:30.639 01:40:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:10:30.639 01:40:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:10:30.639 01:40:29 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:10:30.639 01:40:29 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:10:30.639 01:40:29 -- common/autotest_common.sh@10 -- $ set +x 00:10:30.639 ************************************ 00:10:30.639 START TEST ubsan 00:10:30.639 ************************************ 00:10:30.639 using ubsan 00:10:30.639 01:40:29 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:10:30.639 00:10:30.639 real 0m0.000s 00:10:30.639 user 0m0.000s 00:10:30.639 sys 0m0.000s 00:10:30.639 01:40:29 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:30.639 01:40:29 -- common/autotest_common.sh@10 -- $ set +x 00:10:30.639 ************************************ 00:10:30.639 END TEST ubsan 00:10:30.639 ************************************ 00:10:30.898 01:40:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:10:30.898 01:40:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:30.898 01:40:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:30.898 01:40:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:30.898 01:40:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:30.898 01:40:29 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:10:30.898 01:40:29 -- spdk/autobuild.sh@58 -- $ unittest_build 00:10:30.898 01:40:29 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:10:30.898 01:40:29 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:10:30.898 01:40:29 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:10:30.898 01:40:29 -- common/autotest_common.sh@10 -- $ set +x 00:10:30.898 ************************************ 00:10:30.898 START TEST unittest_build 00:10:30.898 ************************************ 00:10:30.898 01:40:29 -- common/autotest_common.sh@1111 -- $ _unittest_build 00:10:30.898 01:40:29 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:10:30.898 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:30.898 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:31.465 Using 'verbs' RDMA provider 00:10:47.332 Configuring ISA-L (logfile: 
/home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:11:02.240 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:11:02.500 Creating mk/config.mk...done. 00:11:02.500 Creating mk/cc.flags.mk...done. 00:11:02.500 Type 'make' to build. 00:11:02.500 01:41:02 -- common/autobuild_common.sh@403 -- $ make -j10 00:11:02.758 make[1]: Nothing to be done for 'all'. 00:11:17.684 The Meson build system 00:11:17.684 Version: 1.4.0 00:11:17.684 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:11:17.684 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:11:17.684 Build type: native build 00:11:17.684 Program cat found: YES (/usr/bin/cat) 00:11:17.684 Project name: DPDK 00:11:17.684 Project version: 23.11.0 00:11:17.684 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:11:17.684 C linker for the host machine: cc ld.bfd 2.38 00:11:17.684 Host machine cpu family: x86_64 00:11:17.684 Host machine cpu: x86_64 00:11:17.684 Message: ## Building in Developer Mode ## 00:11:17.684 Program pkg-config found: YES (/usr/bin/pkg-config) 00:11:17.684 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:11:17.684 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:11:17.684 Program python3 found: YES (/usr/bin/python3) 00:11:17.684 Program cat found: YES (/usr/bin/cat) 00:11:17.684 Compiler for C supports arguments -march=native: YES 00:11:17.684 Checking for size of "void *" : 8 00:11:17.684 Checking for size of "void *" : 8 (cached) 00:11:17.684 Library m found: YES 00:11:17.684 Library numa found: YES 00:11:17.684 Has header "numaif.h" : YES 00:11:17.684 Library fdt found: NO 00:11:17.684 Library execinfo found: NO 00:11:17.684 Has header "execinfo.h" : YES 00:11:17.684 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:11:17.684 Run-time dependency libarchive found: NO (tried pkgconfig) 00:11:17.684 Run-time dependency libbsd found: NO (tried pkgconfig) 00:11:17.684 Run-time dependency jansson found: NO (tried pkgconfig) 00:11:17.684 Run-time dependency openssl found: YES 3.0.2 00:11:17.684 Run-time dependency libpcap found: NO (tried pkgconfig) 00:11:17.684 Library pcap found: NO 00:11:17.684 Compiler for C supports arguments -Wcast-qual: YES 00:11:17.684 Compiler for C supports arguments -Wdeprecated: YES 00:11:17.684 Compiler for C supports arguments -Wformat: YES 00:11:17.684 Compiler for C supports arguments -Wformat-nonliteral: YES 00:11:17.684 Compiler for C supports arguments -Wformat-security: YES 00:11:17.684 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:17.684 Compiler for C supports arguments -Wmissing-prototypes: YES 00:11:17.684 Compiler for C supports arguments -Wnested-externs: YES 00:11:17.684 Compiler for C supports arguments -Wold-style-definition: YES 00:11:17.684 Compiler for C supports arguments -Wpointer-arith: YES 00:11:17.684 Compiler for C supports arguments -Wsign-compare: YES 00:11:17.684 Compiler for C supports arguments -Wstrict-prototypes: YES 00:11:17.684 Compiler for C supports arguments -Wundef: YES 00:11:17.684 Compiler for C supports arguments -Wwrite-strings: YES 00:11:17.684 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:11:17.684 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:11:17.684 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:17.684 Compiler for C supports 
arguments -Wno-zero-length-bounds: YES 00:11:17.684 Program objdump found: YES (/usr/bin/objdump) 00:11:17.684 Compiler for C supports arguments -mavx512f: YES 00:11:17.684 Checking if "AVX512 checking" compiles: YES 00:11:17.684 Fetching value of define "__SSE4_2__" : 1 00:11:17.684 Fetching value of define "__AES__" : 1 00:11:17.684 Fetching value of define "__AVX__" : 1 00:11:17.684 Fetching value of define "__AVX2__" : 1 00:11:17.684 Fetching value of define "__AVX512BW__" : 1 00:11:17.684 Fetching value of define "__AVX512CD__" : 1 00:11:17.684 Fetching value of define "__AVX512DQ__" : 1 00:11:17.684 Fetching value of define "__AVX512F__" : 1 00:11:17.685 Fetching value of define "__AVX512VL__" : 1 00:11:17.685 Fetching value of define "__PCLMUL__" : 1 00:11:17.685 Fetching value of define "__RDRND__" : 1 00:11:17.685 Fetching value of define "__RDSEED__" : 1 00:11:17.685 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:11:17.685 Fetching value of define "__znver1__" : (undefined) 00:11:17.685 Fetching value of define "__znver2__" : (undefined) 00:11:17.685 Fetching value of define "__znver3__" : (undefined) 00:11:17.685 Fetching value of define "__znver4__" : (undefined) 00:11:17.685 Library asan found: YES 00:11:17.685 Compiler for C supports arguments -Wno-format-truncation: YES 00:11:17.685 Message: lib/log: Defining dependency "log" 00:11:17.685 Message: lib/kvargs: Defining dependency "kvargs" 00:11:17.685 Message: lib/telemetry: Defining dependency "telemetry" 00:11:17.685 Library rt found: YES 00:11:17.685 Checking for function "getentropy" : NO 00:11:17.685 Message: lib/eal: Defining dependency "eal" 00:11:17.685 Message: lib/ring: Defining dependency "ring" 00:11:17.685 Message: lib/rcu: Defining dependency "rcu" 00:11:17.685 Message: lib/mempool: Defining dependency "mempool" 00:11:17.685 Message: lib/mbuf: Defining dependency "mbuf" 00:11:17.685 Fetching value of define "__PCLMUL__" : 1 (cached) 00:11:17.685 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:17.685 Fetching value of define "__AVX512BW__" : 1 (cached) 00:11:17.685 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:11:17.685 Fetching value of define "__AVX512VL__" : 1 (cached) 00:11:17.685 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:11:17.685 Compiler for C supports arguments -mpclmul: YES 00:11:17.685 Compiler for C supports arguments -maes: YES 00:11:17.685 Compiler for C supports arguments -mavx512f: YES (cached) 00:11:17.685 Compiler for C supports arguments -mavx512bw: YES 00:11:17.685 Compiler for C supports arguments -mavx512dq: YES 00:11:17.685 Compiler for C supports arguments -mavx512vl: YES 00:11:17.685 Compiler for C supports arguments -mvpclmulqdq: YES 00:11:17.685 Compiler for C supports arguments -mavx2: YES 00:11:17.685 Compiler for C supports arguments -mavx: YES 00:11:17.685 Message: lib/net: Defining dependency "net" 00:11:17.685 Message: lib/meter: Defining dependency "meter" 00:11:17.685 Message: lib/ethdev: Defining dependency "ethdev" 00:11:17.685 Message: lib/pci: Defining dependency "pci" 00:11:17.685 Message: lib/cmdline: Defining dependency "cmdline" 00:11:17.685 Message: lib/hash: Defining dependency "hash" 00:11:17.685 Message: lib/timer: Defining dependency "timer" 00:11:17.685 Message: lib/compressdev: Defining dependency "compressdev" 00:11:17.685 Message: lib/cryptodev: Defining dependency "cryptodev" 00:11:17.685 Message: lib/dmadev: Defining dependency "dmadev" 00:11:17.685 Compiler for C supports arguments -Wno-cast-qual: YES 
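For reference outside the CI harness, the build whose Meson output surrounds this point boils down to the configure invocation and parallel make recorded a little earlier; the flag list is copied from config_params in the log, and the DPDK subproject being configured here is driven by that same configure:

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
            --with-fio=/usr/src/fio --with-iscsi-initiator \
            --enable-ubsan --enable-asan --enable-coverage \
            --with-raid5f --without-shared
make -j"$(nproc)"        # this run used make -j10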
00:11:17.685 Message: lib/power: Defining dependency "power" 00:11:17.685 Message: lib/reorder: Defining dependency "reorder" 00:11:17.685 Message: lib/security: Defining dependency "security" 00:11:17.685 Has header "linux/userfaultfd.h" : YES 00:11:17.685 Has header "linux/vduse.h" : YES 00:11:17.685 Message: lib/vhost: Defining dependency "vhost" 00:11:17.685 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:11:17.685 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:11:17.685 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:11:17.685 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:11:17.685 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:11:17.685 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:11:17.685 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:11:17.685 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:11:17.685 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:11:17.685 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:11:17.685 Program doxygen found: YES (/usr/bin/doxygen) 00:11:17.685 Configuring doxy-api-html.conf using configuration 00:11:17.685 Configuring doxy-api-man.conf using configuration 00:11:17.685 Program mandb found: YES (/usr/bin/mandb) 00:11:17.685 Program sphinx-build found: NO 00:11:17.685 Configuring rte_build_config.h using configuration 00:11:17.685 Message: 00:11:17.685 ================= 00:11:17.685 Applications Enabled 00:11:17.685 ================= 00:11:17.685 00:11:17.685 apps: 00:11:17.685 00:11:17.685 00:11:17.685 Message: 00:11:17.685 ================= 00:11:17.685 Libraries Enabled 00:11:17.685 ================= 00:11:17.685 00:11:17.685 libs: 00:11:17.685 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:11:17.685 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:11:17.685 cryptodev, dmadev, power, reorder, security, vhost, 00:11:17.685 00:11:17.685 Message: 00:11:17.685 =============== 00:11:17.685 Drivers Enabled 00:11:17.685 =============== 00:11:17.685 00:11:17.685 common: 00:11:17.685 00:11:17.685 bus: 00:11:17.685 pci, vdev, 00:11:17.685 mempool: 00:11:17.685 ring, 00:11:17.685 dma: 00:11:17.685 00:11:17.685 net: 00:11:17.685 00:11:17.685 crypto: 00:11:17.685 00:11:17.685 compress: 00:11:17.685 00:11:17.685 vdpa: 00:11:17.685 00:11:17.685 00:11:17.685 Message: 00:11:17.685 ================= 00:11:17.685 Content Skipped 00:11:17.685 ================= 00:11:17.685 00:11:17.685 apps: 00:11:17.685 dumpcap: explicitly disabled via build config 00:11:17.685 graph: explicitly disabled via build config 00:11:17.685 pdump: explicitly disabled via build config 00:11:17.685 proc-info: explicitly disabled via build config 00:11:17.685 test-acl: explicitly disabled via build config 00:11:17.685 test-bbdev: explicitly disabled via build config 00:11:17.685 test-cmdline: explicitly disabled via build config 00:11:17.685 test-compress-perf: explicitly disabled via build config 00:11:17.685 test-crypto-perf: explicitly disabled via build config 00:11:17.685 test-dma-perf: explicitly disabled via build config 00:11:17.685 test-eventdev: explicitly disabled via build config 00:11:17.685 test-fib: explicitly disabled via build config 00:11:17.685 test-flow-perf: explicitly disabled via build config 00:11:17.685 test-gpudev: explicitly disabled via build config 00:11:17.685 test-mldev: explicitly disabled via build 
config 00:11:17.685 test-pipeline: explicitly disabled via build config 00:11:17.685 test-pmd: explicitly disabled via build config 00:11:17.685 test-regex: explicitly disabled via build config 00:11:17.685 test-sad: explicitly disabled via build config 00:11:17.685 test-security-perf: explicitly disabled via build config 00:11:17.685 00:11:17.685 libs: 00:11:17.685 metrics: explicitly disabled via build config 00:11:17.685 acl: explicitly disabled via build config 00:11:17.685 bbdev: explicitly disabled via build config 00:11:17.685 bitratestats: explicitly disabled via build config 00:11:17.685 bpf: explicitly disabled via build config 00:11:17.685 cfgfile: explicitly disabled via build config 00:11:17.685 distributor: explicitly disabled via build config 00:11:17.685 efd: explicitly disabled via build config 00:11:17.685 eventdev: explicitly disabled via build config 00:11:17.685 dispatcher: explicitly disabled via build config 00:11:17.685 gpudev: explicitly disabled via build config 00:11:17.685 gro: explicitly disabled via build config 00:11:17.685 gso: explicitly disabled via build config 00:11:17.685 ip_frag: explicitly disabled via build config 00:11:17.685 jobstats: explicitly disabled via build config 00:11:17.685 latencystats: explicitly disabled via build config 00:11:17.685 lpm: explicitly disabled via build config 00:11:17.685 member: explicitly disabled via build config 00:11:17.685 pcapng: explicitly disabled via build config 00:11:17.685 rawdev: explicitly disabled via build config 00:11:17.685 regexdev: explicitly disabled via build config 00:11:17.685 mldev: explicitly disabled via build config 00:11:17.685 rib: explicitly disabled via build config 00:11:17.685 sched: explicitly disabled via build config 00:11:17.685 stack: explicitly disabled via build config 00:11:17.685 ipsec: explicitly disabled via build config 00:11:17.685 pdcp: explicitly disabled via build config 00:11:17.685 fib: explicitly disabled via build config 00:11:17.685 port: explicitly disabled via build config 00:11:17.685 pdump: explicitly disabled via build config 00:11:17.685 table: explicitly disabled via build config 00:11:17.685 pipeline: explicitly disabled via build config 00:11:17.685 graph: explicitly disabled via build config 00:11:17.685 node: explicitly disabled via build config 00:11:17.685 00:11:17.685 drivers: 00:11:17.685 common/cpt: not in enabled drivers build config 00:11:17.685 common/dpaax: not in enabled drivers build config 00:11:17.685 common/iavf: not in enabled drivers build config 00:11:17.685 common/idpf: not in enabled drivers build config 00:11:17.685 common/mvep: not in enabled drivers build config 00:11:17.685 common/octeontx: not in enabled drivers build config 00:11:17.685 bus/auxiliary: not in enabled drivers build config 00:11:17.685 bus/cdx: not in enabled drivers build config 00:11:17.685 bus/dpaa: not in enabled drivers build config 00:11:17.685 bus/fslmc: not in enabled drivers build config 00:11:17.685 bus/ifpga: not in enabled drivers build config 00:11:17.685 bus/platform: not in enabled drivers build config 00:11:17.685 bus/vmbus: not in enabled drivers build config 00:11:17.685 common/cnxk: not in enabled drivers build config 00:11:17.685 common/mlx5: not in enabled drivers build config 00:11:17.685 common/nfp: not in enabled drivers build config 00:11:17.685 common/qat: not in enabled drivers build config 00:11:17.685 common/sfc_efx: not in enabled drivers build config 00:11:17.685 mempool/bucket: not in enabled drivers build config 00:11:17.685 
mempool/cnxk: not in enabled drivers build config 00:11:17.685 mempool/dpaa: not in enabled drivers build config 00:11:17.685 mempool/dpaa2: not in enabled drivers build config 00:11:17.685 mempool/octeontx: not in enabled drivers build config 00:11:17.685 mempool/stack: not in enabled drivers build config 00:11:17.685 dma/cnxk: not in enabled drivers build config 00:11:17.685 dma/dpaa: not in enabled drivers build config 00:11:17.685 dma/dpaa2: not in enabled drivers build config 00:11:17.685 dma/hisilicon: not in enabled drivers build config 00:11:17.685 dma/idxd: not in enabled drivers build config 00:11:17.685 dma/ioat: not in enabled drivers build config 00:11:17.685 dma/skeleton: not in enabled drivers build config 00:11:17.686 net/af_packet: not in enabled drivers build config 00:11:17.686 net/af_xdp: not in enabled drivers build config 00:11:17.686 net/ark: not in enabled drivers build config 00:11:17.686 net/atlantic: not in enabled drivers build config 00:11:17.686 net/avp: not in enabled drivers build config 00:11:17.686 net/axgbe: not in enabled drivers build config 00:11:17.686 net/bnx2x: not in enabled drivers build config 00:11:17.686 net/bnxt: not in enabled drivers build config 00:11:17.686 net/bonding: not in enabled drivers build config 00:11:17.686 net/cnxk: not in enabled drivers build config 00:11:17.686 net/cpfl: not in enabled drivers build config 00:11:17.686 net/cxgbe: not in enabled drivers build config 00:11:17.686 net/dpaa: not in enabled drivers build config 00:11:17.686 net/dpaa2: not in enabled drivers build config 00:11:17.686 net/e1000: not in enabled drivers build config 00:11:17.686 net/ena: not in enabled drivers build config 00:11:17.686 net/enetc: not in enabled drivers build config 00:11:17.686 net/enetfec: not in enabled drivers build config 00:11:17.686 net/enic: not in enabled drivers build config 00:11:17.686 net/failsafe: not in enabled drivers build config 00:11:17.686 net/fm10k: not in enabled drivers build config 00:11:17.686 net/gve: not in enabled drivers build config 00:11:17.686 net/hinic: not in enabled drivers build config 00:11:17.686 net/hns3: not in enabled drivers build config 00:11:17.686 net/i40e: not in enabled drivers build config 00:11:17.686 net/iavf: not in enabled drivers build config 00:11:17.686 net/ice: not in enabled drivers build config 00:11:17.686 net/idpf: not in enabled drivers build config 00:11:17.686 net/igc: not in enabled drivers build config 00:11:17.686 net/ionic: not in enabled drivers build config 00:11:17.686 net/ipn3ke: not in enabled drivers build config 00:11:17.686 net/ixgbe: not in enabled drivers build config 00:11:17.686 net/mana: not in enabled drivers build config 00:11:17.686 net/memif: not in enabled drivers build config 00:11:17.686 net/mlx4: not in enabled drivers build config 00:11:17.686 net/mlx5: not in enabled drivers build config 00:11:17.686 net/mvneta: not in enabled drivers build config 00:11:17.686 net/mvpp2: not in enabled drivers build config 00:11:17.686 net/netvsc: not in enabled drivers build config 00:11:17.686 net/nfb: not in enabled drivers build config 00:11:17.686 net/nfp: not in enabled drivers build config 00:11:17.686 net/ngbe: not in enabled drivers build config 00:11:17.686 net/null: not in enabled drivers build config 00:11:17.686 net/octeontx: not in enabled drivers build config 00:11:17.686 net/octeon_ep: not in enabled drivers build config 00:11:17.686 net/pcap: not in enabled drivers build config 00:11:17.686 net/pfe: not in enabled drivers build config 
00:11:17.686 net/qede: not in enabled drivers build config 00:11:17.686 net/ring: not in enabled drivers build config 00:11:17.686 net/sfc: not in enabled drivers build config 00:11:17.686 net/softnic: not in enabled drivers build config 00:11:17.686 net/tap: not in enabled drivers build config 00:11:17.686 net/thunderx: not in enabled drivers build config 00:11:17.686 net/txgbe: not in enabled drivers build config 00:11:17.686 net/vdev_netvsc: not in enabled drivers build config 00:11:17.686 net/vhost: not in enabled drivers build config 00:11:17.686 net/virtio: not in enabled drivers build config 00:11:17.686 net/vmxnet3: not in enabled drivers build config 00:11:17.686 raw/*: missing internal dependency, "rawdev" 00:11:17.686 crypto/armv8: not in enabled drivers build config 00:11:17.686 crypto/bcmfs: not in enabled drivers build config 00:11:17.686 crypto/caam_jr: not in enabled drivers build config 00:11:17.686 crypto/ccp: not in enabled drivers build config 00:11:17.686 crypto/cnxk: not in enabled drivers build config 00:11:17.686 crypto/dpaa_sec: not in enabled drivers build config 00:11:17.686 crypto/dpaa2_sec: not in enabled drivers build config 00:11:17.686 crypto/ipsec_mb: not in enabled drivers build config 00:11:17.686 crypto/mlx5: not in enabled drivers build config 00:11:17.686 crypto/mvsam: not in enabled drivers build config 00:11:17.686 crypto/nitrox: not in enabled drivers build config 00:11:17.686 crypto/null: not in enabled drivers build config 00:11:17.686 crypto/octeontx: not in enabled drivers build config 00:11:17.686 crypto/openssl: not in enabled drivers build config 00:11:17.686 crypto/scheduler: not in enabled drivers build config 00:11:17.686 crypto/uadk: not in enabled drivers build config 00:11:17.686 crypto/virtio: not in enabled drivers build config 00:11:17.686 compress/isal: not in enabled drivers build config 00:11:17.686 compress/mlx5: not in enabled drivers build config 00:11:17.686 compress/octeontx: not in enabled drivers build config 00:11:17.686 compress/zlib: not in enabled drivers build config 00:11:17.686 regex/*: missing internal dependency, "regexdev" 00:11:17.686 ml/*: missing internal dependency, "mldev" 00:11:17.686 vdpa/ifc: not in enabled drivers build config 00:11:17.686 vdpa/mlx5: not in enabled drivers build config 00:11:17.686 vdpa/nfp: not in enabled drivers build config 00:11:17.686 vdpa/sfc: not in enabled drivers build config 00:11:17.686 event/*: missing internal dependency, "eventdev" 00:11:17.686 baseband/*: missing internal dependency, "bbdev" 00:11:17.686 gpu/*: missing internal dependency, "gpudev" 00:11:17.686 00:11:17.686 00:11:17.686 Build targets in project: 85 00:11:17.686 00:11:17.686 DPDK 23.11.0 00:11:17.686 00:11:17.686 User defined options 00:11:17.686 buildtype : debug 00:11:17.686 default_library : static 00:11:17.686 libdir : lib 00:11:17.686 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:17.686 b_sanitize : address 00:11:17.686 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:11:17.686 c_link_args : 00:11:17.686 cpu_instruction_set: native 00:11:17.686 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:11:17.686 disable_libs : 
node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:11:17.686 enable_docs : false 00:11:17.686 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:11:17.686 enable_kmods : false 00:11:17.686 tests : false 00:11:17.686 00:11:17.686 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:17.686 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:11:17.686 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:11:17.686 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:11:17.686 [3/265] Linking static target lib/librte_kvargs.a 00:11:17.686 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:11:17.686 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:11:17.686 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:11:17.686 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:11:17.686 [8/265] Linking static target lib/librte_log.a 00:11:17.944 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:11:17.944 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:11:17.944 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:11:18.203 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:11:18.203 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:11:18.203 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:11:18.203 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:11:18.203 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:11:18.203 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:11:18.461 [18/265] Linking static target lib/librte_telemetry.a 00:11:18.461 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:11:18.461 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:11:18.461 [21/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:11:18.461 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:11:18.461 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:11:18.461 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:11:18.720 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:11:18.720 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:11:18.720 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:11:18.995 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:11:18.995 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:11:18.995 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:11:18.995 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:11:18.995 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:11:18.995 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:11:18.995 [34/265] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:11:19.253 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:11:19.253 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:11:19.253 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:11:19.253 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:11:19.253 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:11:19.253 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:11:19.511 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:11:19.511 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:11:19.511 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:11:19.768 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:11:19.768 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:11:19.768 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:11:19.768 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:11:19.768 [48/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:11:19.768 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:11:19.768 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:11:20.027 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:11:20.027 [52/265] Linking target lib/librte_log.so.24.0 00:11:20.027 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:11:20.027 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:11:20.027 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:11:20.027 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:11:20.027 [57/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:11:20.027 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:11:20.027 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:11:20.285 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:11:20.285 [61/265] Linking target lib/librte_kvargs.so.24.0 00:11:20.285 [62/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:11:20.285 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:11:20.285 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:11:20.285 [65/265] Linking target lib/librte_telemetry.so.24.0 00:11:20.543 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:11:20.543 [67/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:11:20.543 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:11:20.543 [69/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:11:20.543 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:11:20.543 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:11:20.543 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:11:20.802 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:11:20.802 [74/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:11:20.802 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:11:20.802 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:11:20.802 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:11:20.802 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:11:21.061 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:11:21.061 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:11:21.061 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:11:21.061 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:11:21.320 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:11:21.320 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:11:21.320 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:11:21.320 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:11:21.320 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:11:21.320 [88/265] Linking static target lib/librte_ring.a 00:11:21.320 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:11:21.320 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:11:21.578 [91/265] Linking static target lib/librte_eal.a 00:11:21.578 [92/265] Linking static target lib/librte_mempool.a 00:11:21.578 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:11:21.578 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:11:21.578 [95/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:11:21.837 [96/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:11:21.837 [97/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:11:21.837 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:11:21.837 [99/265] Linking static target lib/librte_rcu.a 00:11:21.837 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:11:22.095 [101/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:11:22.095 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:11:22.095 [103/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:11:22.369 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:11:22.369 [105/265] Linking static target lib/librte_meter.a 00:11:22.369 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:11:22.370 [107/265] Linking static target lib/librte_net.a 00:11:22.370 [108/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:11:22.370 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:11:22.370 [110/265] Linking static target lib/librte_mbuf.a 00:11:22.629 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:11:22.629 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:11:22.629 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:11:22.629 [114/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:11:22.890 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:11:22.890 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:11:22.890 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:11:22.890 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:11:23.152 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:11:23.412 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:11:23.412 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:11:23.671 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:11:23.930 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:11:23.930 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:11:23.930 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:11:23.930 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:11:23.930 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:11:23.930 [128/265] Linking static target lib/librte_pci.a 00:11:23.930 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:11:23.930 [130/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:11:24.188 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:11:24.188 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:11:24.188 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:11:24.188 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:11:24.188 [135/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:11:24.188 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:11:24.188 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:11:24.188 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:11:24.445 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:11:24.445 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:11:24.445 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:11:24.445 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:11:24.445 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:11:24.445 [144/265] Linking static target lib/librte_cmdline.a 00:11:24.445 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:11:24.703 [146/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:24.703 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:11:24.703 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:11:24.703 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:11:24.703 [150/265] Linking static target lib/librte_timer.a 00:11:24.703 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:11:24.961 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:11:24.961 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:11:24.961 [154/265] Linking static target lib/librte_ethdev.a 00:11:25.218 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:11:25.218 [156/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:11:25.218 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:11:25.218 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:11:25.218 [159/265] Linking static target lib/librte_compressdev.a 00:11:25.476 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:11:25.476 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:11:25.476 [162/265] Linking static target lib/librte_dmadev.a 00:11:25.476 [163/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:11:25.476 [164/265] Linking static target lib/librte_hash.a 00:11:25.734 [165/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:11:25.734 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:11:25.734 [167/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:11:25.734 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:11:25.992 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:11:26.250 [170/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:11:26.250 [171/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:11:26.250 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:26.250 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:11:26.250 [174/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:26.250 [175/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:11:26.250 [176/265] Linking static target lib/librte_cryptodev.a 00:11:26.508 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:11:26.508 [178/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:11:26.508 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:26.508 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:11:26.508 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:11:26.508 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:11:26.508 [183/265] Linking static target lib/librte_power.a 00:11:26.766 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:11:26.766 [185/265] Linking static target lib/librte_reorder.a 00:11:26.766 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:11:26.766 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:11:27.025 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:11:27.025 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:11:27.025 [190/265] Linking static target lib/librte_security.a 00:11:27.283 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:11:27.541 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:11:27.541 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:11:27.541 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:11:27.541 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:11:27.541 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:11:27.799 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:11:27.799 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:11:27.799 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:11:28.056 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:11:28.056 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:11:28.056 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:11:28.056 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:11:28.056 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:11:28.313 [205/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:11:28.313 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:11:28.314 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:11:28.314 [208/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:28.314 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:11:28.314 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:11:28.314 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:28.314 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:28.314 [213/265] Linking static target drivers/librte_bus_vdev.a 00:11:28.571 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:28.571 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:28.571 [216/265] Linking static target drivers/librte_bus_pci.a 00:11:28.571 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:11:28.571 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:11:28.571 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:28.829 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:11:28.829 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:28.829 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:28.829 [223/265] Linking static target drivers/librte_mempool_ring.a 00:11:29.087 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:31.620 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:32.555 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:33.932 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:11:33.932 [228/265] Linking target lib/librte_eal.so.24.0 00:11:33.932 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:11:33.932 [230/265] Linking target lib/librte_pci.so.24.0 00:11:33.932 [231/265] Linking target lib/librte_timer.so.24.0 00:11:33.932 [232/265] Linking target lib/librte_ring.so.24.0 00:11:33.932 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:11:33.932 [234/265] Linking target 
lib/librte_dmadev.so.24.0 00:11:33.932 [235/265] Linking target lib/librte_meter.so.24.0 00:11:34.191 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:11:34.191 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:11:34.191 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:11:34.191 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:11:34.191 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:11:34.191 [241/265] Linking target lib/librte_mempool.so.24.0 00:11:34.191 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:11:34.191 [243/265] Linking target lib/librte_rcu.so.24.0 00:11:34.191 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:11:34.450 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:11:34.450 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:11:34.450 [247/265] Linking target lib/librte_mbuf.so.24.0 00:11:34.450 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:11:34.709 [249/265] Linking target lib/librte_reorder.so.24.0 00:11:34.709 [250/265] Linking target lib/librte_cryptodev.so.24.0 00:11:34.709 [251/265] Linking target lib/librte_compressdev.so.24.0 00:11:34.709 [252/265] Linking target lib/librte_net.so.24.0 00:11:34.709 [253/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:34.709 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:11:34.709 [255/265] Linking static target lib/librte_vhost.a 00:11:34.709 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:11:34.968 [257/265] Linking target lib/librte_cmdline.so.24.0 00:11:34.968 [258/265] Linking target lib/librte_hash.so.24.0 00:11:34.968 [259/265] Linking target lib/librte_security.so.24.0 00:11:34.968 [260/265] Linking target lib/librte_ethdev.so.24.0 00:11:34.968 [261/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:11:34.968 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:11:35.227 [263/265] Linking target lib/librte_power.so.24.0 00:11:36.607 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.607 [265/265] Linking target lib/librte_vhost.so.24.0 00:11:36.607 INFO: autodetecting backend as ninja 00:11:36.607 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:11:37.983 CC lib/ut_mock/mock.o 00:11:37.983 CC lib/log/log_flags.o 00:11:37.983 CC lib/log/log.o 00:11:37.983 CC lib/log/log_deprecated.o 00:11:37.983 CC lib/ut/ut.o 00:11:37.983 LIB libspdk_ut_mock.a 00:11:37.983 LIB libspdk_ut.a 00:11:37.983 LIB libspdk_log.a 00:11:38.242 CXX lib/trace_parser/trace.o 00:11:38.242 CC lib/util/base64.o 00:11:38.242 CC lib/util/bit_array.o 00:11:38.242 CC lib/util/cpuset.o 00:11:38.242 CC lib/util/crc16.o 00:11:38.242 CC lib/util/crc32c.o 00:11:38.242 CC lib/util/crc32.o 00:11:38.242 CC lib/dma/dma.o 00:11:38.242 CC lib/ioat/ioat.o 00:11:38.242 CC lib/vfio_user/host/vfio_user_pci.o 00:11:38.242 CC lib/util/crc32_ieee.o 00:11:38.242 CC lib/vfio_user/host/vfio_user.o 00:11:38.242 CC lib/util/crc64.o 00:11:38.242 CC lib/util/dif.o 00:11:38.242 LIB libspdk_dma.a 00:11:38.500 CC 
lib/util/fd.o 00:11:38.500 CC lib/util/file.o 00:11:38.500 CC lib/util/hexlify.o 00:11:38.500 CC lib/util/iov.o 00:11:38.500 LIB libspdk_ioat.a 00:11:38.500 CC lib/util/math.o 00:11:38.500 CC lib/util/pipe.o 00:11:38.500 CC lib/util/strerror_tls.o 00:11:38.500 CC lib/util/string.o 00:11:38.500 CC lib/util/uuid.o 00:11:38.500 CC lib/util/fd_group.o 00:11:38.759 CC lib/util/xor.o 00:11:38.759 LIB libspdk_vfio_user.a 00:11:38.759 CC lib/util/zipf.o 00:11:39.017 LIB libspdk_util.a 00:11:39.276 LIB libspdk_trace_parser.a 00:11:39.276 CC lib/rdma/common.o 00:11:39.276 CC lib/rdma/rdma_verbs.o 00:11:39.276 CC lib/vmd/vmd.o 00:11:39.276 CC lib/vmd/led.o 00:11:39.276 CC lib/idxd/idxd.o 00:11:39.276 CC lib/idxd/idxd_user.o 00:11:39.276 CC lib/conf/conf.o 00:11:39.276 CC lib/json/json_util.o 00:11:39.276 CC lib/json/json_parse.o 00:11:39.276 CC lib/env_dpdk/env.o 00:11:39.535 CC lib/json/json_write.o 00:11:39.535 CC lib/env_dpdk/memory.o 00:11:39.535 LIB libspdk_conf.a 00:11:39.535 CC lib/env_dpdk/pci.o 00:11:39.535 CC lib/env_dpdk/init.o 00:11:39.535 CC lib/env_dpdk/threads.o 00:11:39.535 LIB libspdk_rdma.a 00:11:39.535 CC lib/env_dpdk/pci_ioat.o 00:11:39.535 CC lib/env_dpdk/pci_virtio.o 00:11:39.793 CC lib/env_dpdk/pci_vmd.o 00:11:39.793 CC lib/env_dpdk/pci_idxd.o 00:11:39.793 LIB libspdk_json.a 00:11:39.793 CC lib/env_dpdk/pci_event.o 00:11:39.793 CC lib/env_dpdk/sigbus_handler.o 00:11:39.793 CC lib/env_dpdk/pci_dpdk.o 00:11:40.051 CC lib/env_dpdk/pci_dpdk_2207.o 00:11:40.051 CC lib/env_dpdk/pci_dpdk_2211.o 00:11:40.051 LIB libspdk_idxd.a 00:11:40.051 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:11:40.051 CC lib/jsonrpc/jsonrpc_client.o 00:11:40.051 CC lib/jsonrpc/jsonrpc_server.o 00:11:40.051 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:11:40.051 LIB libspdk_vmd.a 00:11:40.620 LIB libspdk_jsonrpc.a 00:11:40.620 CC lib/rpc/rpc.o 00:11:40.888 LIB libspdk_rpc.a 00:11:41.144 LIB libspdk_env_dpdk.a 00:11:41.144 CC lib/trace/trace.o 00:11:41.144 CC lib/trace/trace_rpc.o 00:11:41.144 CC lib/trace/trace_flags.o 00:11:41.144 CC lib/keyring/keyring.o 00:11:41.144 CC lib/keyring/keyring_rpc.o 00:11:41.144 CC lib/notify/notify_rpc.o 00:11:41.144 CC lib/notify/notify.o 00:11:41.402 LIB libspdk_notify.a 00:11:41.659 LIB libspdk_keyring.a 00:11:41.659 LIB libspdk_trace.a 00:11:41.968 CC lib/sock/sock.o 00:11:41.968 CC lib/sock/sock_rpc.o 00:11:41.968 CC lib/thread/thread.o 00:11:41.968 CC lib/thread/iobuf.o 00:11:42.916 LIB libspdk_sock.a 00:11:42.916 CC lib/nvme/nvme_ctrlr_cmd.o 00:11:42.916 CC lib/nvme/nvme_fabric.o 00:11:42.916 CC lib/nvme/nvme_ctrlr.o 00:11:42.916 CC lib/nvme/nvme_ns_cmd.o 00:11:42.916 CC lib/nvme/nvme_ns.o 00:11:42.916 CC lib/nvme/nvme_pcie_common.o 00:11:42.916 CC lib/nvme/nvme.o 00:11:42.916 CC lib/nvme/nvme_pcie.o 00:11:42.916 CC lib/nvme/nvme_qpair.o 00:11:43.845 CC lib/nvme/nvme_quirks.o 00:11:43.845 CC lib/nvme/nvme_transport.o 00:11:43.845 CC lib/nvme/nvme_discovery.o 00:11:43.845 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:11:43.845 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:11:43.845 CC lib/nvme/nvme_tcp.o 00:11:44.102 LIB libspdk_thread.a 00:11:44.102 CC lib/nvme/nvme_opal.o 00:11:44.102 CC lib/nvme/nvme_io_msg.o 00:11:44.358 CC lib/nvme/nvme_poll_group.o 00:11:44.358 CC lib/nvme/nvme_zns.o 00:11:44.358 CC lib/nvme/nvme_stubs.o 00:11:44.615 CC lib/nvme/nvme_auth.o 00:11:44.615 CC lib/accel/accel.o 00:11:44.615 CC lib/nvme/nvme_cuse.o 00:11:44.615 CC lib/nvme/nvme_rdma.o 00:11:44.872 CC lib/blob/blobstore.o 00:11:44.872 CC lib/blob/request.o 00:11:44.872 CC lib/blob/zeroes.o 00:11:45.128 CC 
lib/blob/blob_bs_dev.o 00:11:45.388 CC lib/accel/accel_rpc.o 00:11:45.388 CC lib/init/json_config.o 00:11:45.388 CC lib/init/subsystem.o 00:11:45.388 CC lib/virtio/virtio.o 00:11:45.645 CC lib/virtio/virtio_vhost_user.o 00:11:45.645 CC lib/virtio/virtio_vfio_user.o 00:11:45.645 CC lib/init/subsystem_rpc.o 00:11:45.902 CC lib/accel/accel_sw.o 00:11:45.902 CC lib/init/rpc.o 00:11:45.902 CC lib/virtio/virtio_pci.o 00:11:45.902 LIB libspdk_init.a 00:11:46.160 LIB libspdk_accel.a 00:11:46.160 CC lib/event/app.o 00:11:46.160 CC lib/event/log_rpc.o 00:11:46.160 CC lib/event/reactor.o 00:11:46.160 CC lib/event/scheduler_static.o 00:11:46.160 CC lib/event/app_rpc.o 00:11:46.418 LIB libspdk_virtio.a 00:11:46.418 LIB libspdk_nvme.a 00:11:46.418 CC lib/bdev/bdev.o 00:11:46.418 CC lib/bdev/bdev_zone.o 00:11:46.418 CC lib/bdev/bdev_rpc.o 00:11:46.418 CC lib/bdev/part.o 00:11:46.418 CC lib/bdev/scsi_nvme.o 00:11:46.984 LIB libspdk_event.a 00:11:48.882 LIB libspdk_blob.a 00:11:49.139 CC lib/lvol/lvol.o 00:11:49.397 CC lib/blobfs/blobfs.o 00:11:49.397 CC lib/blobfs/tree.o 00:11:49.961 LIB libspdk_bdev.a 00:11:50.219 CC lib/ftl/ftl_core.o 00:11:50.219 CC lib/ftl/ftl_layout.o 00:11:50.219 CC lib/ftl/ftl_io.o 00:11:50.219 CC lib/ftl/ftl_debug.o 00:11:50.219 CC lib/ftl/ftl_init.o 00:11:50.219 CC lib/nvmf/ctrlr.o 00:11:50.219 CC lib/scsi/dev.o 00:11:50.219 CC lib/nbd/nbd.o 00:11:50.219 LIB libspdk_lvol.a 00:11:50.219 LIB libspdk_blobfs.a 00:11:50.478 CC lib/nbd/nbd_rpc.o 00:11:50.478 CC lib/scsi/lun.o 00:11:50.478 CC lib/scsi/port.o 00:11:50.478 CC lib/scsi/scsi.o 00:11:50.478 CC lib/ftl/ftl_sb.o 00:11:50.478 CC lib/ftl/ftl_l2p.o 00:11:50.478 CC lib/scsi/scsi_bdev.o 00:11:50.478 CC lib/ftl/ftl_l2p_flat.o 00:11:50.478 CC lib/scsi/scsi_pr.o 00:11:50.478 CC lib/scsi/scsi_rpc.o 00:11:50.737 CC lib/nvmf/ctrlr_discovery.o 00:11:50.737 LIB libspdk_nbd.a 00:11:50.737 CC lib/scsi/task.o 00:11:50.737 CC lib/ftl/ftl_nv_cache.o 00:11:50.737 CC lib/ftl/ftl_band.o 00:11:50.737 CC lib/ftl/ftl_band_ops.o 00:11:50.737 CC lib/ftl/ftl_writer.o 00:11:50.737 CC lib/ftl/ftl_rq.o 00:11:50.996 CC lib/ftl/ftl_reloc.o 00:11:50.996 CC lib/ftl/ftl_l2p_cache.o 00:11:50.996 CC lib/ftl/ftl_p2l.o 00:11:50.996 LIB libspdk_scsi.a 00:11:50.996 CC lib/ftl/mngt/ftl_mngt.o 00:11:50.996 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:51.255 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:51.255 CC lib/nvmf/ctrlr_bdev.o 00:11:51.255 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:51.255 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:51.255 CC lib/nvmf/subsystem.o 00:11:51.255 CC lib/iscsi/conn.o 00:11:51.514 CC lib/iscsi/init_grp.o 00:11:51.514 CC lib/iscsi/iscsi.o 00:11:51.514 CC lib/iscsi/md5.o 00:11:51.514 CC lib/iscsi/param.o 00:11:51.514 CC lib/iscsi/portal_grp.o 00:11:51.773 CC lib/nvmf/nvmf.o 00:11:51.773 CC lib/nvmf/nvmf_rpc.o 00:11:51.773 CC lib/iscsi/tgt_node.o 00:11:52.031 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:52.031 CC lib/vhost/vhost.o 00:11:52.031 CC lib/nvmf/transport.o 00:11:52.031 CC lib/nvmf/tcp.o 00:11:52.031 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:52.289 CC lib/nvmf/rdma.o 00:11:52.289 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:52.289 CC lib/vhost/vhost_rpc.o 00:11:52.547 CC lib/vhost/vhost_scsi.o 00:11:52.547 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:52.547 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:52.805 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:52.805 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:52.805 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:52.805 CC lib/vhost/vhost_blk.o 00:11:52.805 CC lib/vhost/rte_vhost_user.o 00:11:53.063 CC lib/ftl/utils/ftl_conf.o 00:11:53.063 CC 
lib/ftl/utils/ftl_md.o 00:11:53.063 CC lib/ftl/utils/ftl_mempool.o 00:11:53.063 CC lib/iscsi/iscsi_subsystem.o 00:11:53.321 CC lib/ftl/utils/ftl_bitmap.o 00:11:53.321 CC lib/ftl/utils/ftl_property.o 00:11:53.321 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:53.321 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:53.321 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:53.580 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:53.580 CC lib/iscsi/iscsi_rpc.o 00:11:53.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:53.580 CC lib/iscsi/task.o 00:11:53.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:53.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:53.580 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:53.837 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:53.837 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:53.837 CC lib/ftl/base/ftl_base_dev.o 00:11:53.837 CC lib/ftl/base/ftl_base_bdev.o 00:11:53.837 CC lib/ftl/ftl_trace.o 00:11:54.095 LIB libspdk_vhost.a 00:11:54.095 LIB libspdk_iscsi.a 00:11:54.095 LIB libspdk_ftl.a 00:11:55.027 LIB libspdk_nvmf.a 00:11:55.284 CC module/env_dpdk/env_dpdk_rpc.o 00:11:55.284 CC module/blob/bdev/blob_bdev.o 00:11:55.542 CC module/accel/iaa/accel_iaa.o 00:11:55.542 CC module/accel/error/accel_error.o 00:11:55.542 CC module/accel/ioat/accel_ioat.o 00:11:55.542 CC module/accel/dsa/accel_dsa.o 00:11:55.542 CC module/scheduler/dynamic/scheduler_dynamic.o 00:11:55.542 CC module/keyring/file/keyring.o 00:11:55.542 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:11:55.542 CC module/sock/posix/posix.o 00:11:55.542 LIB libspdk_env_dpdk_rpc.a 00:11:55.542 CC module/keyring/file/keyring_rpc.o 00:11:55.542 CC module/accel/iaa/accel_iaa_rpc.o 00:11:55.542 CC module/accel/error/accel_error_rpc.o 00:11:55.542 LIB libspdk_scheduler_dpdk_governor.a 00:11:55.542 LIB libspdk_scheduler_dynamic.a 00:11:55.799 CC module/accel/dsa/accel_dsa_rpc.o 00:11:55.799 LIB libspdk_keyring_file.a 00:11:55.799 CC module/accel/ioat/accel_ioat_rpc.o 00:11:55.799 LIB libspdk_accel_iaa.a 00:11:55.799 LIB libspdk_accel_error.a 00:11:55.799 LIB libspdk_blob_bdev.a 00:11:55.799 CC module/scheduler/gscheduler/gscheduler.o 00:11:55.799 LIB libspdk_accel_ioat.a 00:11:55.799 CC module/keyring/linux/keyring_rpc.o 00:11:55.799 CC module/keyring/linux/keyring.o 00:11:55.799 LIB libspdk_accel_dsa.a 00:11:56.060 LIB libspdk_scheduler_gscheduler.a 00:11:56.060 CC module/bdev/error/vbdev_error.o 00:11:56.060 CC module/bdev/gpt/gpt.o 00:11:56.060 CC module/bdev/error/vbdev_error_rpc.o 00:11:56.060 CC module/blobfs/bdev/blobfs_bdev.o 00:11:56.060 LIB libspdk_keyring_linux.a 00:11:56.060 CC module/bdev/lvol/vbdev_lvol.o 00:11:56.060 CC module/bdev/delay/vbdev_delay.o 00:11:56.060 CC module/bdev/malloc/bdev_malloc.o 00:11:56.060 CC module/bdev/malloc/bdev_malloc_rpc.o 00:11:56.322 CC module/bdev/delay/vbdev_delay_rpc.o 00:11:56.322 CC module/bdev/null/bdev_null.o 00:11:56.322 CC module/bdev/gpt/vbdev_gpt.o 00:11:56.322 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:11:56.322 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:11:56.322 LIB libspdk_bdev_error.a 00:11:56.580 LIB libspdk_sock_posix.a 00:11:56.580 LIB libspdk_bdev_delay.a 00:11:56.580 LIB libspdk_blobfs_bdev.a 00:11:56.580 CC module/bdev/null/bdev_null_rpc.o 00:11:56.580 CC module/bdev/nvme/bdev_nvme.o 00:11:56.580 CC module/bdev/passthru/vbdev_passthru.o 00:11:56.580 LIB libspdk_bdev_malloc.a 00:11:56.580 LIB libspdk_bdev_gpt.a 00:11:56.837 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:11:56.837 CC module/bdev/nvme/bdev_nvme_rpc.o 00:11:56.837 CC module/bdev/raid/bdev_raid.o 00:11:56.837 LIB libspdk_bdev_lvol.a 
00:11:56.837 CC module/bdev/zone_block/vbdev_zone_block.o 00:11:56.837 CC module/bdev/split/vbdev_split.o 00:11:56.837 LIB libspdk_bdev_null.a 00:11:56.837 CC module/bdev/split/vbdev_split_rpc.o 00:11:56.837 CC module/bdev/raid/bdev_raid_rpc.o 00:11:56.837 CC module/bdev/raid/bdev_raid_sb.o 00:11:56.837 CC module/bdev/aio/bdev_aio.o 00:11:57.095 LIB libspdk_bdev_passthru.a 00:11:57.095 CC module/bdev/aio/bdev_aio_rpc.o 00:11:57.095 LIB libspdk_bdev_split.a 00:11:57.095 CC module/bdev/raid/raid0.o 00:11:57.095 CC module/bdev/raid/raid1.o 00:11:57.095 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:11:57.095 CC module/bdev/raid/concat.o 00:11:57.353 CC module/bdev/raid/raid5f.o 00:11:57.353 LIB libspdk_bdev_zone_block.a 00:11:57.353 CC module/bdev/nvme/nvme_rpc.o 00:11:57.353 CC module/bdev/nvme/bdev_mdns_client.o 00:11:57.353 LIB libspdk_bdev_aio.a 00:11:57.353 CC module/bdev/nvme/vbdev_opal.o 00:11:57.353 CC module/bdev/nvme/vbdev_opal_rpc.o 00:11:57.611 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:11:57.611 CC module/bdev/ftl/bdev_ftl.o 00:11:57.611 CC module/bdev/iscsi/bdev_iscsi.o 00:11:57.611 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:11:57.868 CC module/bdev/ftl/bdev_ftl_rpc.o 00:11:57.868 CC module/bdev/virtio/bdev_virtio_scsi.o 00:11:57.868 CC module/bdev/virtio/bdev_virtio_rpc.o 00:11:57.868 CC module/bdev/virtio/bdev_virtio_blk.o 00:11:57.868 LIB libspdk_bdev_raid.a 00:11:58.125 LIB libspdk_bdev_ftl.a 00:11:58.125 LIB libspdk_bdev_iscsi.a 00:11:58.691 LIB libspdk_bdev_virtio.a 00:11:59.255 LIB libspdk_bdev_nvme.a 00:11:59.819 CC module/event/subsystems/iobuf/iobuf.o 00:11:59.819 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:11:59.819 CC module/event/subsystems/sock/sock.o 00:11:59.819 CC module/event/subsystems/scheduler/scheduler.o 00:11:59.819 CC module/event/subsystems/keyring/keyring.o 00:11:59.819 CC module/event/subsystems/vmd/vmd_rpc.o 00:11:59.819 CC module/event/subsystems/vmd/vmd.o 00:11:59.819 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:11:59.819 LIB libspdk_event_vhost_blk.a 00:11:59.819 LIB libspdk_event_keyring.a 00:11:59.819 LIB libspdk_event_scheduler.a 00:12:00.077 LIB libspdk_event_sock.a 00:12:00.077 LIB libspdk_event_iobuf.a 00:12:00.077 LIB libspdk_event_vmd.a 00:12:00.335 CC module/event/subsystems/accel/accel.o 00:12:00.593 LIB libspdk_event_accel.a 00:12:00.851 CC module/event/subsystems/bdev/bdev.o 00:12:01.108 LIB libspdk_event_bdev.a 00:12:01.366 CC module/event/subsystems/nbd/nbd.o 00:12:01.366 CC module/event/subsystems/scsi/scsi.o 00:12:01.366 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:12:01.366 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:01.623 LIB libspdk_event_nbd.a 00:12:01.623 LIB libspdk_event_scsi.a 00:12:01.623 LIB libspdk_event_nvmf.a 00:12:01.880 CC module/event/subsystems/iscsi/iscsi.o 00:12:01.880 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:12:01.880 LIB libspdk_event_vhost_scsi.a 00:12:02.137 LIB libspdk_event_iscsi.a 00:12:02.394 CC app/trace_record/trace_record.o 00:12:02.394 CXX app/trace/trace.o 00:12:02.394 CC app/iscsi_tgt/iscsi_tgt.o 00:12:02.394 CC app/nvmf_tgt/nvmf_main.o 00:12:02.394 CC examples/accel/perf/accel_perf.o 00:12:02.394 CC examples/ioat/perf/perf.o 00:12:02.394 CC examples/nvme/hello_world/hello_world.o 00:12:02.394 CC test/accel/dif/dif.o 00:12:02.394 CC examples/bdev/hello_world/hello_bdev.o 00:12:02.394 CC examples/blob/hello_world/hello_blob.o 00:12:02.651 LINK nvmf_tgt 00:12:02.651 LINK iscsi_tgt 00:12:02.651 LINK spdk_trace_record 00:12:02.651 LINK ioat_perf 00:12:02.651 LINK 
hello_world 00:12:02.651 LINK hello_bdev 00:12:02.651 LINK spdk_trace 00:12:02.651 LINK hello_blob 00:12:02.908 LINK dif 00:12:02.908 LINK accel_perf 00:12:03.165 CC examples/blob/cli/blobcli.o 00:12:03.423 CC test/app/bdev_svc/bdev_svc.o 00:12:03.681 CC examples/ioat/verify/verify.o 00:12:03.681 LINK bdev_svc 00:12:03.681 LINK blobcli 00:12:03.939 LINK verify 00:12:04.195 CC test/bdev/bdevio/bdevio.o 00:12:04.453 CC examples/nvme/reconnect/reconnect.o 00:12:04.710 CC examples/bdev/bdevperf/bdevperf.o 00:12:04.710 LINK bdevio 00:12:04.710 LINK reconnect 00:12:05.643 LINK bdevperf 00:12:06.211 CC examples/sock/hello_world/hello_sock.o 00:12:06.211 CC examples/nvme/nvme_manage/nvme_manage.o 00:12:06.469 CC examples/nvme/arbitration/arbitration.o 00:12:06.469 CC examples/nvme/hotplug/hotplug.o 00:12:06.469 LINK hello_sock 00:12:06.751 CC examples/nvme/cmb_copy/cmb_copy.o 00:12:06.751 LINK arbitration 00:12:06.751 LINK hotplug 00:12:06.751 LINK cmb_copy 00:12:07.009 LINK nvme_manage 00:12:07.009 CC app/spdk_tgt/spdk_tgt.o 00:12:07.266 LINK spdk_tgt 00:12:07.266 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:07.525 CC app/spdk_lspci/spdk_lspci.o 00:12:07.525 CC app/spdk_nvme_perf/perf.o 00:12:07.525 LINK spdk_lspci 00:12:08.089 LINK nvme_fuzz 00:12:08.089 CC app/spdk_nvme_identify/identify.o 00:12:08.347 TEST_HEADER include/spdk/accel.h 00:12:08.347 TEST_HEADER include/spdk/accel_module.h 00:12:08.347 TEST_HEADER include/spdk/assert.h 00:12:08.347 TEST_HEADER include/spdk/barrier.h 00:12:08.347 TEST_HEADER include/spdk/base64.h 00:12:08.347 TEST_HEADER include/spdk/bdev.h 00:12:08.347 TEST_HEADER include/spdk/bdev_module.h 00:12:08.347 TEST_HEADER include/spdk/bdev_zone.h 00:12:08.347 TEST_HEADER include/spdk/bit_array.h 00:12:08.347 TEST_HEADER include/spdk/bit_pool.h 00:12:08.347 TEST_HEADER include/spdk/blob.h 00:12:08.347 TEST_HEADER include/spdk/blob_bdev.h 00:12:08.347 TEST_HEADER include/spdk/blobfs.h 00:12:08.347 TEST_HEADER include/spdk/blobfs_bdev.h 00:12:08.347 TEST_HEADER include/spdk/conf.h 00:12:08.347 TEST_HEADER include/spdk/config.h 00:12:08.347 TEST_HEADER include/spdk/cpuset.h 00:12:08.347 TEST_HEADER include/spdk/crc16.h 00:12:08.347 TEST_HEADER include/spdk/crc32.h 00:12:08.347 TEST_HEADER include/spdk/crc64.h 00:12:08.347 TEST_HEADER include/spdk/dif.h 00:12:08.347 TEST_HEADER include/spdk/dma.h 00:12:08.347 TEST_HEADER include/spdk/endian.h 00:12:08.347 TEST_HEADER include/spdk/env.h 00:12:08.347 TEST_HEADER include/spdk/env_dpdk.h 00:12:08.347 TEST_HEADER include/spdk/event.h 00:12:08.347 TEST_HEADER include/spdk/fd.h 00:12:08.347 TEST_HEADER include/spdk/fd_group.h 00:12:08.347 TEST_HEADER include/spdk/file.h 00:12:08.347 TEST_HEADER include/spdk/ftl.h 00:12:08.347 TEST_HEADER include/spdk/gpt_spec.h 00:12:08.347 TEST_HEADER include/spdk/hexlify.h 00:12:08.347 TEST_HEADER include/spdk/histogram_data.h 00:12:08.347 TEST_HEADER include/spdk/idxd.h 00:12:08.347 TEST_HEADER include/spdk/idxd_spec.h 00:12:08.347 TEST_HEADER include/spdk/init.h 00:12:08.347 TEST_HEADER include/spdk/ioat.h 00:12:08.347 TEST_HEADER include/spdk/ioat_spec.h 00:12:08.347 TEST_HEADER include/spdk/iscsi_spec.h 00:12:08.347 TEST_HEADER include/spdk/json.h 00:12:08.347 TEST_HEADER include/spdk/jsonrpc.h 00:12:08.347 TEST_HEADER include/spdk/keyring.h 00:12:08.347 TEST_HEADER include/spdk/keyring_module.h 00:12:08.347 TEST_HEADER include/spdk/likely.h 00:12:08.347 TEST_HEADER include/spdk/log.h 00:12:08.347 TEST_HEADER include/spdk/lvol.h 00:12:08.347 TEST_HEADER include/spdk/memory.h 00:12:08.347 
TEST_HEADER include/spdk/mmio.h 00:12:08.347 CC test/blobfs/mkfs/mkfs.o 00:12:08.347 TEST_HEADER include/spdk/nbd.h 00:12:08.347 TEST_HEADER include/spdk/notify.h 00:12:08.347 TEST_HEADER include/spdk/nvme.h 00:12:08.347 CC app/spdk_nvme_discover/discovery_aer.o 00:12:08.347 TEST_HEADER include/spdk/nvme_intel.h 00:12:08.347 TEST_HEADER include/spdk/nvme_ocssd.h 00:12:08.347 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:12:08.347 TEST_HEADER include/spdk/nvme_spec.h 00:12:08.347 TEST_HEADER include/spdk/nvme_zns.h 00:12:08.347 TEST_HEADER include/spdk/nvmf.h 00:12:08.347 TEST_HEADER include/spdk/nvmf_cmd.h 00:12:08.347 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:12:08.347 TEST_HEADER include/spdk/nvmf_spec.h 00:12:08.347 TEST_HEADER include/spdk/nvmf_transport.h 00:12:08.347 TEST_HEADER include/spdk/opal.h 00:12:08.347 TEST_HEADER include/spdk/opal_spec.h 00:12:08.347 TEST_HEADER include/spdk/pci_ids.h 00:12:08.347 TEST_HEADER include/spdk/pipe.h 00:12:08.347 TEST_HEADER include/spdk/queue.h 00:12:08.348 TEST_HEADER include/spdk/reduce.h 00:12:08.348 TEST_HEADER include/spdk/rpc.h 00:12:08.348 TEST_HEADER include/spdk/scheduler.h 00:12:08.348 TEST_HEADER include/spdk/scsi.h 00:12:08.348 TEST_HEADER include/spdk/scsi_spec.h 00:12:08.348 TEST_HEADER include/spdk/sock.h 00:12:08.348 TEST_HEADER include/spdk/stdinc.h 00:12:08.348 TEST_HEADER include/spdk/string.h 00:12:08.348 TEST_HEADER include/spdk/thread.h 00:12:08.348 TEST_HEADER include/spdk/trace.h 00:12:08.348 TEST_HEADER include/spdk/trace_parser.h 00:12:08.348 TEST_HEADER include/spdk/tree.h 00:12:08.348 TEST_HEADER include/spdk/ublk.h 00:12:08.348 TEST_HEADER include/spdk/util.h 00:12:08.605 TEST_HEADER include/spdk/uuid.h 00:12:08.605 TEST_HEADER include/spdk/version.h 00:12:08.605 TEST_HEADER include/spdk/vfio_user_pci.h 00:12:08.605 TEST_HEADER include/spdk/vfio_user_spec.h 00:12:08.605 TEST_HEADER include/spdk/vhost.h 00:12:08.605 TEST_HEADER include/spdk/vmd.h 00:12:08.605 TEST_HEADER include/spdk/xor.h 00:12:08.605 TEST_HEADER include/spdk/zipf.h 00:12:08.605 CXX test/cpp_headers/accel.o 00:12:08.605 CC examples/nvme/abort/abort.o 00:12:08.605 LINK mkfs 00:12:08.605 LINK spdk_nvme_discover 00:12:08.605 LINK spdk_nvme_perf 00:12:08.605 CXX test/cpp_headers/accel_module.o 00:12:08.874 LINK abort 00:12:08.874 CXX test/cpp_headers/assert.o 00:12:08.874 CXX test/cpp_headers/barrier.o 00:12:09.132 LINK spdk_nvme_identify 00:12:09.132 CXX test/cpp_headers/base64.o 00:12:09.389 CC app/spdk_top/spdk_top.o 00:12:09.389 CXX test/cpp_headers/bdev.o 00:12:09.389 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:09.646 CXX test/cpp_headers/bdev_module.o 00:12:09.646 CC app/vhost/vhost.o 00:12:09.904 CXX test/cpp_headers/bdev_zone.o 00:12:09.904 LINK vhost 00:12:10.162 CXX test/cpp_headers/bit_array.o 00:12:10.162 CXX test/cpp_headers/bit_pool.o 00:12:10.162 CXX test/cpp_headers/blob.o 00:12:10.418 LINK spdk_top 00:12:10.418 CC app/spdk_dd/spdk_dd.o 00:12:10.418 CXX test/cpp_headers/blob_bdev.o 00:12:10.418 CC app/fio/nvme/fio_plugin.o 00:12:10.675 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:12:10.675 CXX test/cpp_headers/blobfs.o 00:12:10.675 CXX test/cpp_headers/blobfs_bdev.o 00:12:10.675 LINK pmr_persistence 00:12:10.933 LINK spdk_dd 00:12:10.933 CXX test/cpp_headers/conf.o 00:12:11.190 CC test/dma/test_dma/test_dma.o 00:12:11.190 CXX test/cpp_headers/config.o 00:12:11.190 CXX test/cpp_headers/cpuset.o 00:12:11.190 LINK spdk_nvme 00:12:11.448 CXX test/cpp_headers/crc16.o 00:12:11.448 CC test/env/mem_callbacks/mem_callbacks.o 
00:12:11.448 CC test/env/vtophys/vtophys.o 00:12:11.448 LINK iscsi_fuzz 00:12:11.448 CXX test/cpp_headers/crc32.o 00:12:11.448 LINK test_dma 00:12:11.705 LINK vtophys 00:12:11.705 CXX test/cpp_headers/crc64.o 00:12:11.963 LINK mem_callbacks 00:12:11.963 CXX test/cpp_headers/dif.o 00:12:12.221 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:12:12.221 CXX test/cpp_headers/dma.o 00:12:12.479 CC examples/vmd/lsvmd/lsvmd.o 00:12:12.479 LINK env_dpdk_post_init 00:12:12.479 CXX test/cpp_headers/endian.o 00:12:12.479 CC examples/nvmf/nvmf/nvmf.o 00:12:12.479 LINK lsvmd 00:12:12.737 CC examples/util/zipf/zipf.o 00:12:12.737 CC app/fio/bdev/fio_plugin.o 00:12:12.994 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:12:12.994 LINK zipf 00:12:12.994 CXX test/cpp_headers/env.o 00:12:12.994 LINK nvmf 00:12:12.994 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:12:13.253 CXX test/cpp_headers/env_dpdk.o 00:12:13.253 CXX test/cpp_headers/event.o 00:12:13.511 LINK spdk_bdev 00:12:13.511 LINK vhost_fuzz 00:12:13.511 CXX test/cpp_headers/fd.o 00:12:13.511 CC test/env/memory/memory_ut.o 00:12:13.770 CC test/event/event_perf/event_perf.o 00:12:13.770 CXX test/cpp_headers/fd_group.o 00:12:13.770 LINK event_perf 00:12:13.770 CC examples/vmd/led/led.o 00:12:14.029 CXX test/cpp_headers/file.o 00:12:14.029 CXX test/cpp_headers/ftl.o 00:12:14.029 LINK led 00:12:14.287 CXX test/cpp_headers/gpt_spec.o 00:12:14.287 CC test/env/pci/pci_ut.o 00:12:14.287 CXX test/cpp_headers/hexlify.o 00:12:14.287 LINK memory_ut 00:12:14.545 CC test/lvol/esnap/esnap.o 00:12:14.545 CXX test/cpp_headers/histogram_data.o 00:12:14.545 CC test/app/histogram_perf/histogram_perf.o 00:12:14.545 CXX test/cpp_headers/idxd.o 00:12:14.804 LINK pci_ut 00:12:14.804 LINK histogram_perf 00:12:14.804 CXX test/cpp_headers/idxd_spec.o 00:12:14.804 CC test/event/reactor/reactor.o 00:12:14.804 CC test/event/reactor_perf/reactor_perf.o 00:12:14.804 CC test/nvme/aer/aer.o 00:12:15.062 LINK reactor 00:12:15.063 CXX test/cpp_headers/init.o 00:12:15.063 LINK reactor_perf 00:12:15.063 CXX test/cpp_headers/ioat.o 00:12:15.063 CXX test/cpp_headers/ioat_spec.o 00:12:15.321 LINK aer 00:12:15.321 CXX test/cpp_headers/iscsi_spec.o 00:12:15.321 CC test/nvme/reset/reset.o 00:12:15.321 CXX test/cpp_headers/json.o 00:12:15.638 CC test/app/jsoncat/jsoncat.o 00:12:15.638 CC test/app/stub/stub.o 00:12:15.638 CXX test/cpp_headers/jsonrpc.o 00:12:15.638 LINK reset 00:12:15.638 LINK jsoncat 00:12:15.923 CXX test/cpp_headers/keyring.o 00:12:15.923 LINK stub 00:12:15.923 CXX test/cpp_headers/keyring_module.o 00:12:15.923 CC test/event/app_repeat/app_repeat.o 00:12:15.923 CC test/nvme/sgl/sgl.o 00:12:16.182 LINK app_repeat 00:12:16.182 CXX test/cpp_headers/likely.o 00:12:16.440 LINK sgl 00:12:16.440 CXX test/cpp_headers/log.o 00:12:16.440 CXX test/cpp_headers/lvol.o 00:12:16.698 CC examples/idxd/perf/perf.o 00:12:16.698 CC examples/thread/thread/thread_ex.o 00:12:16.698 CXX test/cpp_headers/memory.o 00:12:16.956 CC examples/interrupt_tgt/interrupt_tgt.o 00:12:16.956 CC test/nvme/e2edp/nvme_dp.o 00:12:16.956 CXX test/cpp_headers/mmio.o 00:12:16.956 LINK thread 00:12:17.215 LINK interrupt_tgt 00:12:17.215 CXX test/cpp_headers/nbd.o 00:12:17.215 CXX test/cpp_headers/notify.o 00:12:17.215 CC test/nvme/overhead/overhead.o 00:12:17.215 LINK idxd_perf 00:12:17.215 LINK nvme_dp 00:12:17.215 CC test/nvme/err_injection/err_injection.o 00:12:17.473 CXX test/cpp_headers/nvme.o 00:12:17.473 LINK overhead 00:12:17.473 LINK err_injection 00:12:17.473 CXX test/cpp_headers/nvme_intel.o 00:12:17.473 
CC test/event/scheduler/scheduler.o 00:12:17.731 CC test/nvme/startup/startup.o 00:12:17.731 CXX test/cpp_headers/nvme_ocssd.o 00:12:17.731 LINK startup 00:12:17.989 CXX test/cpp_headers/nvme_ocssd_spec.o 00:12:17.989 LINK scheduler 00:12:18.247 CXX test/cpp_headers/nvme_spec.o 00:12:18.247 CC test/nvme/reserve/reserve.o 00:12:18.505 CXX test/cpp_headers/nvme_zns.o 00:12:18.505 CXX test/cpp_headers/nvmf.o 00:12:18.505 CXX test/cpp_headers/nvmf_cmd.o 00:12:18.762 LINK reserve 00:12:18.762 CXX test/cpp_headers/nvmf_fc_spec.o 00:12:18.762 CC test/nvme/simple_copy/simple_copy.o 00:12:18.762 CC test/nvme/connect_stress/connect_stress.o 00:12:19.020 CC test/nvme/boot_partition/boot_partition.o 00:12:19.020 CXX test/cpp_headers/nvmf_spec.o 00:12:19.020 LINK connect_stress 00:12:19.020 LINK boot_partition 00:12:19.020 LINK simple_copy 00:12:19.020 CC test/nvme/compliance/nvme_compliance.o 00:12:19.278 CXX test/cpp_headers/nvmf_transport.o 00:12:19.278 CXX test/cpp_headers/opal.o 00:12:19.537 CXX test/cpp_headers/opal_spec.o 00:12:19.537 CC test/rpc_client/rpc_client_test.o 00:12:19.537 LINK nvme_compliance 00:12:19.537 CXX test/cpp_headers/pci_ids.o 00:12:19.795 LINK rpc_client_test 00:12:19.795 CXX test/cpp_headers/pipe.o 00:12:20.054 CXX test/cpp_headers/queue.o 00:12:20.054 CXX test/cpp_headers/reduce.o 00:12:20.054 CXX test/cpp_headers/rpc.o 00:12:20.054 CXX test/cpp_headers/scheduler.o 00:12:20.054 LINK esnap 00:12:20.313 CC test/nvme/fused_ordering/fused_ordering.o 00:12:20.313 CXX test/cpp_headers/scsi.o 00:12:20.313 CXX test/cpp_headers/scsi_spec.o 00:12:20.313 CC test/nvme/doorbell_aers/doorbell_aers.o 00:12:20.313 CC test/thread/poller_perf/poller_perf.o 00:12:20.313 CC test/nvme/fdp/fdp.o 00:12:20.313 LINK fused_ordering 00:12:20.313 CXX test/cpp_headers/sock.o 00:12:20.571 LINK poller_perf 00:12:20.571 LINK doorbell_aers 00:12:20.571 CC test/nvme/cuse/cuse.o 00:12:20.571 CXX test/cpp_headers/stdinc.o 00:12:20.829 LINK fdp 00:12:20.829 CXX test/cpp_headers/string.o 00:12:20.829 CXX test/cpp_headers/thread.o 00:12:20.829 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:12:20.829 CXX test/cpp_headers/trace.o 00:12:20.829 LINK histogram_ut 00:12:21.086 CC test/unit/lib/accel/accel.c/accel_ut.o 00:12:21.086 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:12:21.086 CXX test/cpp_headers/trace_parser.o 00:12:21.086 CXX test/cpp_headers/tree.o 00:12:21.345 CXX test/cpp_headers/ublk.o 00:12:21.345 CXX test/cpp_headers/util.o 00:12:21.345 CC test/unit/lib/bdev/part.c/part_ut.o 00:12:21.345 CC test/thread/lock/spdk_lock.o 00:12:21.345 CXX test/cpp_headers/uuid.o 00:12:21.345 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:12:21.602 LINK cuse 00:12:21.602 CXX test/cpp_headers/version.o 00:12:21.602 CXX test/cpp_headers/vfio_user_pci.o 00:12:21.602 CXX test/cpp_headers/vfio_user_spec.o 00:12:21.602 LINK scsi_nvme_ut 00:12:21.860 CXX test/cpp_headers/vhost.o 00:12:21.860 CXX test/cpp_headers/vmd.o 00:12:21.860 CXX test/cpp_headers/xor.o 00:12:21.860 CXX test/cpp_headers/zipf.o 00:12:21.860 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:12:22.118 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:12:22.118 CC test/unit/lib/blob/blob.c/blob_ut.o 00:12:22.118 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:12:22.118 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:12:22.118 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:12:22.685 LINK blob_bdev_ut 00:12:22.685 LINK tree_ut 00:12:22.685 LINK gpt_ut 00:12:22.949 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:12:22.949 CC 
test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:12:22.949 LINK vbdev_lvol_ut 00:12:23.213 LINK blobfs_bdev_ut 00:12:23.213 CC test/unit/lib/dma/dma.c/dma_ut.o 00:12:23.213 LINK accel_ut 00:12:23.471 LINK spdk_lock 00:12:23.471 LINK blobfs_async_ut 00:12:23.471 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:12:23.729 LINK dma_ut 00:12:23.729 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:12:23.729 CC test/unit/lib/event/app.c/app_ut.o 00:12:23.988 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:12:23.988 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:12:24.245 LINK blobfs_sync_ut 00:12:24.503 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:12:24.503 LINK ioat_ut 00:12:24.503 LINK app_ut 00:12:24.760 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:12:24.760 LINK part_ut 00:12:24.760 LINK reactor_ut 00:12:24.760 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:12:24.760 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:12:25.044 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:12:25.301 LINK bdev_raid_sb_ut 00:12:25.301 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:12:25.301 LINK concat_ut 00:12:25.301 LINK bdev_zone_ut 00:12:25.559 LINK conn_ut 00:12:25.559 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:12:25.559 LINK init_grp_ut 00:12:25.817 LINK bdev_raid_ut 00:12:25.817 CC test/unit/lib/iscsi/param.c/param_ut.o 00:12:25.817 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:12:25.817 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:12:25.817 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:12:26.075 LINK jsonrpc_server_ut 00:12:26.075 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:12:26.332 LINK param_ut 00:12:26.332 LINK bdev_ut 00:12:26.332 LINK json_util_ut 00:12:26.332 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:12:26.590 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:12:26.590 LINK json_write_ut 00:12:26.590 LINK raid1_ut 00:12:26.847 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:12:26.847 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:12:26.847 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:12:27.105 LINK portal_grp_ut 00:12:27.105 CC test/unit/lib/log/log.c/log_ut.o 00:12:27.105 LINK bdev_ut 00:12:27.363 LINK json_parse_ut 00:12:27.363 LINK log_ut 00:12:27.621 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:12:27.621 LINK vbdev_zone_block_ut 00:12:27.621 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:12:27.880 LINK tgt_node_ut 00:12:27.880 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:12:27.880 CC test/unit/lib/notify/notify.c/notify_ut.o 00:12:27.880 LINK raid5f_ut 00:12:27.880 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:12:27.880 LINK iscsi_ut 00:12:28.182 LINK notify_ut 00:12:28.182 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:12:28.440 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:12:28.440 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:12:28.440 CC test/unit/lib/sock/sock.c/sock_ut.o 00:12:28.698 LINK dev_ut 00:12:28.956 LINK nvme_ut 00:12:28.956 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:12:29.214 LINK lvol_ut 00:12:29.472 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:12:29.472 LINK blob_ut 00:12:29.730 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:12:29.730 LINK lun_ut 00:12:29.988 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:12:29.988 LINK sock_ut 00:12:30.246 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:12:30.505 CC test/unit/lib/sock/posix.c/posix_ut.o 00:12:30.505 LINK scsi_ut 00:12:30.766 LINK nvme_ctrlr_cmd_ut 00:12:30.766 LINK 
subsystem_ut 00:12:31.024 LINK ctrlr_ut 00:12:31.024 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:12:31.024 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:12:31.024 LINK ctrlr_bdev_ut 00:12:31.362 LINK bdev_nvme_ut 00:12:31.362 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:12:31.362 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:12:31.362 LINK nvme_ctrlr_ut 00:12:31.362 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:12:31.623 LINK posix_ut 00:12:31.623 LINK ctrlr_discovery_ut 00:12:31.623 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:12:31.884 CC test/unit/lib/thread/thread.c/thread_ut.o 00:12:31.884 LINK tcp_ut 00:12:31.884 LINK scsi_pr_ut 00:12:31.884 CC test/unit/lib/util/base64.c/base64_ut.o 00:12:31.884 LINK scsi_bdev_ut 00:12:32.148 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:12:32.148 LINK nvmf_ut 00:12:32.148 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:12:32.408 LINK base64_ut 00:12:32.408 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:12:32.408 LINK pci_event_ut 00:12:32.666 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:12:32.666 LINK nvme_ctrlr_ocssd_cmd_ut 00:12:32.666 LINK subsystem_ut 00:12:32.666 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:12:32.666 LINK rpc_ut 00:12:32.666 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:12:32.923 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:12:32.923 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:12:32.923 LINK rpc_ut 00:12:33.179 LINK keyring_ut 00:12:33.179 LINK bit_array_ut 00:12:33.179 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:12:33.179 CC test/unit/lib/rdma/common.c/common_ut.o 00:12:33.438 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:12:33.438 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:12:33.438 LINK idxd_user_ut 00:12:33.438 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:12:33.696 LINK cpuset_ut 00:12:33.696 LINK ftl_l2p_ut 00:12:33.696 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:12:33.696 LINK common_ut 00:12:33.696 LINK nvme_ns_ut 00:12:33.954 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:12:33.954 LINK thread_ut 00:12:33.954 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:12:33.954 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:12:33.954 LINK crc16_ut 00:12:34.216 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:12:34.216 LINK crc32_ieee_ut 00:12:34.216 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:12:34.477 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:12:34.477 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:12:34.477 LINK idxd_ut 00:12:34.733 LINK crc64_ut 00:12:34.733 LINK transport_ut 00:12:34.733 LINK crc32c_ut 00:12:34.990 CC test/unit/lib/util/dif.c/dif_ut.o 00:12:34.990 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:12:34.990 CC test/unit/lib/util/iov.c/iov_ut.o 00:12:35.248 LINK rdma_ut 00:12:35.248 LINK iobuf_ut 00:12:35.248 CC test/unit/lib/util/math.c/math_ut.o 00:12:35.248 LINK iov_ut 00:12:35.507 LINK nvme_ns_cmd_ut 00:12:35.507 LINK ftl_band_ut 00:12:35.507 LINK vhost_ut 00:12:35.507 LINK math_ut 00:12:35.765 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:12:35.765 LINK ftl_io_ut 00:12:35.765 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:12:35.765 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:12:35.765 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:12:35.765 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:12:35.765 LINK nvme_ns_ocssd_cmd_ut 00:12:36.044 CC test/unit/lib/util/string.c/string_ut.o 00:12:36.044 CC test/unit/lib/util/xor.c/xor_ut.o 00:12:36.044 LINK ftl_bitmap_ut 00:12:36.044 CC 
test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:12:36.329 LINK dif_ut 00:12:36.329 LINK pipe_ut 00:12:36.329 LINK ftl_mempool_ut 00:12:36.329 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:12:36.329 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:12:36.329 LINK string_ut 00:12:36.586 LINK xor_ut 00:12:36.586 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:12:36.586 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:12:36.586 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:12:36.586 LINK nvme_poll_group_ut 00:12:36.586 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:12:36.844 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:12:36.844 LINK ftl_mngt_ut 00:12:36.844 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:12:36.844 LINK nvme_quirks_ut 00:12:37.102 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:12:37.361 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:12:37.361 LINK nvme_qpair_ut 00:12:37.619 LINK nvme_pcie_ut 00:12:37.619 LINK nvme_io_msg_ut 00:12:37.877 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:12:37.877 LINK nvme_transport_ut 00:12:37.877 LINK ftl_layout_upgrade_ut 00:12:37.877 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:12:38.135 LINK nvme_fabric_ut 00:12:38.135 LINK ftl_sb_ut 00:12:38.135 LINK nvme_opal_ut 00:12:38.702 LINK nvme_pcie_common_ut 00:12:39.270 LINK nvme_tcp_ut 00:12:39.529 LINK nvme_cuse_ut 00:12:40.095 LINK nvme_rdma_ut 00:12:40.353 ************************************ 00:12:40.353 END TEST unittest_build 00:12:40.353 ************************************ 00:12:40.353 00:12:40.353 real 2m10.500s 00:12:40.353 user 10m44.947s 00:12:40.353 sys 2m34.136s 00:12:40.353 01:42:40 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:12:40.353 01:42:40 -- common/autotest_common.sh@10 -- $ set +x 00:12:40.353 01:42:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:12:40.353 01:42:40 -- pm/common@30 -- $ signal_monitor_resources TERM 00:12:40.353 01:42:40 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:12:40.353 01:42:40 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.353 01:42:40 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:12:40.353 01:42:40 -- pm/common@45 -- $ pid=2143 00:12:40.354 01:42:40 -- pm/common@52 -- $ sudo kill -TERM 2143 00:12:40.354 01:42:40 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.354 01:42:40 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:12:40.354 01:42:40 -- pm/common@45 -- $ pid=2144 00:12:40.354 01:42:40 -- pm/common@52 -- $ sudo kill -TERM 2144 00:12:40.354 01:42:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:40.354 01:42:40 -- nvmf/common.sh@7 -- # uname -s 00:12:40.354 01:42:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.354 01:42:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.354 01:42:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.354 01:42:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.354 01:42:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.354 01:42:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.354 01:42:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.354 01:42:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.354 01:42:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.354 01:42:40 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:12:40.354 01:42:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:24ab8b80-cb72-4ae0-92b2-cd2a67c361c9 00:12:40.354 01:42:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=24ab8b80-cb72-4ae0-92b2-cd2a67c361c9 00:12:40.354 01:42:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.354 01:42:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.354 01:42:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:40.354 01:42:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.354 01:42:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.354 01:42:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.354 01:42:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.354 01:42:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.354 01:42:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:40.354 01:42:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:40.354 01:42:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:40.354 01:42:40 -- paths/export.sh@5 -- # export PATH 00:12:40.354 01:42:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:12:40.354 01:42:40 -- nvmf/common.sh@47 -- # : 0 00:12:40.354 01:42:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.354 01:42:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.354 01:42:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.354 01:42:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.354 01:42:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.354 01:42:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.354 01:42:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.354 01:42:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.354 01:42:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:12:40.354 01:42:40 -- spdk/autotest.sh@32 -- # uname -s 00:12:40.613 01:42:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:12:40.613 01:42:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:12:40.613 01:42:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:40.613 01:42:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:12:40.613 01:42:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:40.613 01:42:40 -- spdk/autotest.sh@44 -- # 
modprobe nbd 00:12:40.613 01:42:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:12:40.613 01:42:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:12:40.613 01:42:40 -- spdk/autotest.sh@48 -- # udevadm_pid=99461 00:12:40.613 01:42:40 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:12:40.613 01:42:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:12:40.613 01:42:40 -- pm/common@17 -- # local monitor 00:12:40.613 01:42:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.613 01:42:40 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=99462 00:12:40.613 01:42:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.613 01:42:40 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=99465 00:12:40.613 01:42:40 -- pm/common@26 -- # sleep 1 00:12:40.613 01:42:40 -- pm/common@21 -- # date +%s 00:12:40.613 01:42:40 -- pm/common@21 -- # date +%s 00:12:40.613 01:42:40 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713922960 00:12:40.613 01:42:40 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713922960 00:12:40.613 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713922960_collect-vmstat.pm.log 00:12:40.613 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713922960_collect-cpu-load.pm.log 00:12:41.557 01:42:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:12:41.557 01:42:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:12:41.557 01:42:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:41.557 01:42:41 -- common/autotest_common.sh@10 -- # set +x 00:12:41.557 01:42:41 -- spdk/autotest.sh@59 -- # create_test_list 00:12:41.557 01:42:41 -- common/autotest_common.sh@734 -- # xtrace_disable 00:12:41.557 01:42:41 -- common/autotest_common.sh@10 -- # set +x 00:12:41.557 01:42:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:12:41.557 01:42:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:12:41.557 01:42:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:12:41.557 01:42:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:12:41.557 01:42:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:12:41.557 01:42:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:12:41.557 01:42:41 -- common/autotest_common.sh@1441 -- # uname 00:12:41.557 01:42:41 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:12:41.557 01:42:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:12:41.557 01:42:41 -- common/autotest_common.sh@1461 -- # uname 00:12:41.557 01:42:41 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:12:41.557 01:42:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:12:41.557 01:42:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:12:41.557 01:42:41 -- spdk/autotest.sh@72 -- # hash lcov 00:12:41.557 01:42:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:12:41.557 01:42:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:12:41.557 --rc lcov_branch_coverage=1 00:12:41.557 --rc lcov_function_coverage=1 00:12:41.557 --rc genhtml_branch_coverage=1 00:12:41.557 --rc genhtml_function_coverage=1 00:12:41.557 
--rc genhtml_legend=1 00:12:41.557 --rc geninfo_all_blocks=1 00:12:41.557 ' 00:12:41.557 01:42:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:12:41.557 --rc lcov_branch_coverage=1 00:12:41.557 --rc lcov_function_coverage=1 00:12:41.557 --rc genhtml_branch_coverage=1 00:12:41.557 --rc genhtml_function_coverage=1 00:12:41.557 --rc genhtml_legend=1 00:12:41.557 --rc geninfo_all_blocks=1 00:12:41.557 ' 00:12:41.557 01:42:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:12:41.557 --rc lcov_branch_coverage=1 00:12:41.557 --rc lcov_function_coverage=1 00:12:41.557 --rc genhtml_branch_coverage=1 00:12:41.557 --rc genhtml_function_coverage=1 00:12:41.557 --rc genhtml_legend=1 00:12:41.557 --rc geninfo_all_blocks=1 00:12:41.557 --no-external' 00:12:41.557 01:42:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:12:41.557 --rc lcov_branch_coverage=1 00:12:41.557 --rc lcov_function_coverage=1 00:12:41.557 --rc genhtml_branch_coverage=1 00:12:41.557 --rc genhtml_function_coverage=1 00:12:41.557 --rc genhtml_legend=1 00:12:41.557 --rc geninfo_all_blocks=1 00:12:41.557 --no-external' 00:12:41.557 01:42:41 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:12:41.815 lcov: LCOV version 1.15 00:12:41.815 01:42:41 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:12:48.372 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:12:48.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:13:03.279 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:13:03.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:13:03.279 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:13:03.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:13:03.279 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:13:03.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:13:35.371 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:13:35.371 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:13:35.371 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:13:35.372 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:13:35.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:13:35.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:13:35.372 01:43:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:13:35.372 01:43:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:35.372 01:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 01:43:34 -- spdk/autotest.sh@91 -- # rm -f 00:13:35.372 01:43:34 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:35.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:35.372 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:13:35.372 01:43:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:13:35.372 01:43:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:35.372 01:43:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:35.372 01:43:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:35.372 01:43:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:35.372 01:43:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:35.372 01:43:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:35.372 01:43:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:35.372 01:43:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:35.373 01:43:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:13:35.373 01:43:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:35.373 01:43:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:35.373 01:43:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:13:35.373 01:43:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:13:35.373 01:43:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:35.373 No valid GPT data, bailing 00:13:35.373 01:43:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:35.373 01:43:34 -- scripts/common.sh@391 -- # pt= 00:13:35.373 01:43:34 -- scripts/common.sh@392 -- # return 1 00:13:35.373 01:43:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:13:35.373 1+0 records in 00:13:35.373 1+0 records out 00:13:35.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051625 s, 203 MB/s 00:13:35.373 01:43:34 -- spdk/autotest.sh@118 -- # sync 00:13:35.373 01:43:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:13:35.373 01:43:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:13:35.373 01:43:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:13:36.307 01:43:36 -- spdk/autotest.sh@124 -- # uname -s 00:13:36.307 01:43:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:13:36.307 01:43:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:13:36.307 01:43:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:36.307 01:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.307 01:43:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.307 ************************************ 00:13:36.307 START TEST setup.sh 00:13:36.307 ************************************ 00:13:36.307 01:43:36 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:13:36.307 * Looking for test storage... 00:13:36.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:36.307 01:43:36 -- setup/test-setup.sh@10 -- # uname -s 00:13:36.307 01:43:36 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:13:36.307 01:43:36 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:13:36.307 01:43:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:36.307 01:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.307 01:43:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.307 ************************************ 00:13:36.307 START TEST acl 00:13:36.307 ************************************ 00:13:36.307 01:43:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:13:36.566 * Looking for test storage... 00:13:36.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:36.566 01:43:36 -- setup/acl.sh@10 -- # get_zoned_devs 00:13:36.566 01:43:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:36.566 01:43:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:36.566 01:43:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:36.566 01:43:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:36.566 01:43:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:36.566 01:43:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:36.566 01:43:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:36.566 01:43:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:36.566 01:43:36 -- setup/acl.sh@12 -- # devs=() 00:13:36.566 01:43:36 -- setup/acl.sh@12 -- # declare -a devs 00:13:36.566 01:43:36 -- setup/acl.sh@13 -- # drivers=() 00:13:36.566 01:43:36 -- setup/acl.sh@13 -- # declare -A drivers 00:13:36.566 01:43:36 -- setup/acl.sh@51 -- # setup reset 00:13:36.566 01:43:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:36.566 01:43:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:37.133 01:43:37 -- setup/acl.sh@52 -- # collect_setup_devs 00:13:37.133 01:43:37 -- setup/acl.sh@16 -- # local dev driver 00:13:37.133 01:43:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:37.133 01:43:37 -- setup/acl.sh@15 -- # setup output status 00:13:37.133 01:43:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:37.133 01:43:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:37.392 01:43:37 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:13:37.392 01:43:37 -- setup/acl.sh@19 -- # continue 00:13:37.392 01:43:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:37.392 Hugepages 00:13:37.392 node hugesize free / total 00:13:37.392 01:43:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:13:37.392 01:43:37 -- setup/acl.sh@19 -- # continue 00:13:37.392 01:43:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:37.392 00:13:37.392 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:37.392 01:43:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:13:37.392 01:43:37 -- setup/acl.sh@19 -- # continue 00:13:37.392 01:43:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:37.651 01:43:37 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:13:37.651 01:43:37 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:13:37.651 01:43:37 
-- setup/acl.sh@20 -- # continue 00:13:37.651 01:43:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:37.651 01:43:37 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:13:37.651 01:43:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:13:37.651 01:43:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:13:37.651 01:43:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:13:37.651 01:43:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:13:37.651 01:43:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:37.651 01:43:37 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:13:37.651 01:43:37 -- setup/acl.sh@54 -- # run_test denied denied 00:13:37.651 01:43:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:37.651 01:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.651 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:37.908 ************************************ 00:13:37.908 START TEST denied 00:13:37.908 ************************************ 00:13:37.908 01:43:37 -- common/autotest_common.sh@1111 -- # denied 00:13:37.908 01:43:37 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:13:37.908 01:43:37 -- setup/acl.sh@38 -- # setup output config 00:13:37.908 01:43:37 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:13:37.908 01:43:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:37.908 01:43:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:39.283 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:13:39.283 01:43:39 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:13:39.283 01:43:39 -- setup/acl.sh@28 -- # local dev driver 00:13:39.283 01:43:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:13:39.283 01:43:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:13:39.283 01:43:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:13:39.283 01:43:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:13:39.283 01:43:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:13:39.283 01:43:39 -- setup/acl.sh@41 -- # setup reset 00:13:39.283 01:43:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:39.283 01:43:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:39.848 00:13:39.848 real 0m1.944s 00:13:39.848 user 0m0.540s 00:13:39.848 sys 0m1.458s 00:13:39.848 01:43:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.848 01:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.848 ************************************ 00:13:39.848 END TEST denied 00:13:39.848 ************************************ 00:13:39.848 01:43:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:13:39.848 01:43:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:39.848 01:43:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.848 01:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.848 ************************************ 00:13:39.848 START TEST allowed 00:13:39.849 ************************************ 00:13:39.849 01:43:39 -- common/autotest_common.sh@1111 -- # allowed 00:13:39.849 01:43:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:13:39.849 01:43:39 -- setup/acl.sh@45 -- # setup output config 00:13:39.849 01:43:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:39.849 01:43:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:39.849 01:43:39 -- setup/acl.sh@46 -- # grep -E 
'0000:00:10.0 .*: nvme -> .*' 00:13:41.750 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.750 01:43:41 -- setup/acl.sh@47 -- # verify 00:13:41.750 01:43:41 -- setup/acl.sh@28 -- # local dev driver 00:13:41.750 01:43:41 -- setup/acl.sh@48 -- # setup reset 00:13:41.750 01:43:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:41.750 01:43:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:42.008 00:13:42.008 real 0m2.056s 00:13:42.008 user 0m0.431s 00:13:42.008 sys 0m1.617s 00:13:42.008 01:43:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:42.008 01:43:41 -- common/autotest_common.sh@10 -- # set +x 00:13:42.008 ************************************ 00:13:42.008 END TEST allowed 00:13:42.008 ************************************ 00:13:42.008 ************************************ 00:13:42.008 END TEST acl 00:13:42.008 ************************************ 00:13:42.008 00:13:42.008 real 0m5.532s 00:13:42.008 user 0m1.754s 00:13:42.008 sys 0m3.906s 00:13:42.008 01:43:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:42.008 01:43:41 -- common/autotest_common.sh@10 -- # set +x 00:13:42.008 01:43:41 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:13:42.008 01:43:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:42.008 01:43:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:42.008 01:43:41 -- common/autotest_common.sh@10 -- # set +x 00:13:42.008 ************************************ 00:13:42.008 START TEST hugepages 00:13:42.009 ************************************ 00:13:42.009 01:43:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:13:42.268 * Looking for test storage... 
00:13:42.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:42.268 01:43:42 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:13:42.268 01:43:42 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:13:42.268 01:43:42 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:13:42.268 01:43:42 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:13:42.268 01:43:42 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:13:42.268 01:43:42 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:13:42.268 01:43:42 -- setup/common.sh@17 -- # local get=Hugepagesize 00:13:42.268 01:43:42 -- setup/common.sh@18 -- # local node= 00:13:42.268 01:43:42 -- setup/common.sh@19 -- # local var val 00:13:42.268 01:43:42 -- setup/common.sh@20 -- # local mem_f mem 00:13:42.268 01:43:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:42.268 01:43:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:42.268 01:43:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:42.268 01:43:42 -- setup/common.sh@28 -- # mapfile -t mem 00:13:42.268 01:43:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 2877452 kB' 'MemAvailable: 7391720 kB' 'Buffers: 35984 kB' 'Cached: 4614648 kB' 'SwapCached: 0 kB' 'Active: 1012644 kB' 'Inactive: 3762124 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 134756 kB' 'Active(file): 1011600 kB' 'Inactive(file): 3627368 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 528 kB' 'Writeback: 0 kB' 'AnonPages: 153396 kB' 'Mapped: 67988 kB' 'Shmem: 2600 kB' 'KReclaimable: 196816 kB' 'Slab: 261952 kB' 'SReclaimable: 196816 kB' 'SUnreclaim: 65136 kB' 'KernelStack: 4568 kB' 'PageTables: 3884 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024328 kB' 'Committed_AS: 509964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 
-- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 
00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.268 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.268 01:43:42 -- setup/common.sh@32 -- # continue 00:13:42.269 01:43:42 -- setup/common.sh@31 -- # IFS=': ' 00:13:42.269 01:43:42 -- setup/common.sh@31 -- # read -r var val _ 00:13:42.269 01:43:42 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:42.269 01:43:42 -- setup/common.sh@33 -- # echo 2048 00:13:42.269 01:43:42 -- setup/common.sh@33 -- # return 0 00:13:42.269 01:43:42 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:13:42.269 01:43:42 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:13:42.269 01:43:42 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:13:42.269 01:43:42 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:13:42.269 01:43:42 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:13:42.269 01:43:42 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:13:42.269 01:43:42 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:13:42.269 01:43:42 -- setup/hugepages.sh@207 -- # get_nodes 00:13:42.269 01:43:42 -- setup/hugepages.sh@27 -- # local node 00:13:42.269 01:43:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:42.269 01:43:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:13:42.269 01:43:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:42.269 01:43:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:42.269 01:43:42 -- setup/hugepages.sh@208 -- # clear_hp 00:13:42.269 01:43:42 -- setup/hugepages.sh@37 -- # local node hp 00:13:42.269 01:43:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:42.269 01:43:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:42.269 01:43:42 -- setup/hugepages.sh@41 -- # echo 0 00:13:42.269 01:43:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:42.269 01:43:42 -- setup/hugepages.sh@41 -- # echo 0 00:13:42.269 01:43:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:42.269 01:43:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:42.269 01:43:42 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:13:42.269 01:43:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:42.269 01:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:42.269 01:43:42 -- common/autotest_common.sh@10 -- # set +x 00:13:42.269 ************************************ 00:13:42.269 START TEST default_setup 00:13:42.269 ************************************ 00:13:42.269 01:43:42 -- common/autotest_common.sh@1111 -- # default_setup 00:13:42.269 01:43:42 -- 
setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:13:42.269 01:43:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:13:42.269 01:43:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:42.269 01:43:42 -- setup/hugepages.sh@51 -- # shift 00:13:42.269 01:43:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:42.269 01:43:42 -- setup/hugepages.sh@52 -- # local node_ids 00:13:42.269 01:43:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:42.269 01:43:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:42.269 01:43:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:42.269 01:43:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:42.269 01:43:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:42.269 01:43:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:42.269 01:43:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:42.269 01:43:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:42.269 01:43:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:42.269 01:43:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:42.269 01:43:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:42.269 01:43:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:42.269 01:43:42 -- setup/hugepages.sh@73 -- # return 0 00:13:42.269 01:43:42 -- setup/hugepages.sh@137 -- # setup output 00:13:42.269 01:43:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:42.269 01:43:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:42.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:42.763 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.327 01:43:43 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:13:43.327 01:43:43 -- setup/hugepages.sh@89 -- # local node 00:13:43.327 01:43:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:43.327 01:43:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:43.327 01:43:43 -- setup/hugepages.sh@92 -- # local surp 00:13:43.327 01:43:43 -- setup/hugepages.sh@93 -- # local resv 00:13:43.327 01:43:43 -- setup/hugepages.sh@94 -- # local anon 00:13:43.327 01:43:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:43.327 01:43:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:43.327 01:43:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:43.327 01:43:43 -- setup/common.sh@18 -- # local node= 00:13:43.327 01:43:43 -- setup/common.sh@19 -- # local var val 00:13:43.327 01:43:43 -- setup/common.sh@20 -- # local mem_f mem 00:13:43.327 01:43:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:43.327 01:43:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:43.327 01:43:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:43.327 01:43:43 -- setup/common.sh@28 -- # mapfile -t mem 00:13:43.327 01:43:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:43.327 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4964764 kB' 'MemAvailable: 9479100 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012740 kB' 'Inactive: 3773388 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145968 kB' 'Active(file): 1011684 kB' 'Inactive(file): 3627420 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 588 kB' 'Writeback: 0 kB' 'AnonPages: 
164568 kB' 'Mapped: 68028 kB' 'Shmem: 2596 kB' 'KReclaimable: 196748 kB' 'Slab: 261920 kB' 'SReclaimable: 196748 kB' 'SUnreclaim: 65172 kB' 'KernelStack: 4420 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.328 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.328 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
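A few entries above, before this AnonHugePages scan began, the default_setup test resolved get_test_nr_hugepages 2097152 0 to nr_hugepages=1024 on node 0. That is the requested 2097152 kB (2 GiB) divided by the 2048 kB Hugepagesize read earlier; a sketch of the arithmetic with illustrative variable names:

    # Requested hugepage memory divided by the default hugepage size.
    size_kb=2097152           # argument passed to get_test_nr_hugepages (2 GiB)
    hugepagesize_kb=2048      # Hugepagesize reported by /proc/meminfo (2 MiB)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"      # 1024, matching nr_hugepages=1024 in the trace
    nodes_test=([0]=$nr_hugepages)   # all pages requested on node 0, per the trace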
00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:43.329 01:43:43 -- setup/common.sh@33 -- # echo 0 00:13:43.329 01:43:43 -- setup/common.sh@33 -- # return 0 00:13:43.329 01:43:43 -- setup/hugepages.sh@97 -- # anon=0 00:13:43.329 01:43:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:43.329 01:43:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:43.329 01:43:43 -- setup/common.sh@18 -- # local node= 00:13:43.329 01:43:43 -- setup/common.sh@19 -- # local var val 00:13:43.329 01:43:43 -- setup/common.sh@20 -- # local mem_f mem 00:13:43.329 01:43:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:43.329 01:43:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:43.329 01:43:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:43.329 01:43:43 -- setup/common.sh@28 -- # mapfile -t mem 00:13:43.329 01:43:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4964764 kB' 'MemAvailable: 9479100 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012740 kB' 'Inactive: 3773648 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 146228 kB' 'Active(file): 1011684 kB' 'Inactive(file): 3627420 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 588 kB' 'Writeback: 0 kB' 'AnonPages: 164828 kB' 'Mapped: 68028 kB' 'Shmem: 2596 kB' 'KReclaimable: 196748 kB' 'Slab: 261920 kB' 'SReclaimable: 196748 kB' 'SUnreclaim: 65172 kB' 'KernelStack: 4420 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 
01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.329 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.329 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.330 01:43:43 -- setup/common.sh@33 -- # echo 0 00:13:43.330 01:43:43 -- setup/common.sh@33 -- # return 0 00:13:43.330 01:43:43 -- setup/hugepages.sh@99 -- # surp=0 00:13:43.330 01:43:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:43.330 01:43:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:43.330 01:43:43 -- setup/common.sh@18 -- # local node= 00:13:43.330 01:43:43 -- setup/common.sh@19 -- # local var val 00:13:43.330 01:43:43 -- setup/common.sh@20 -- # local mem_f mem 00:13:43.330 01:43:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:43.330 01:43:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:43.330 01:43:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:43.330 01:43:43 -- setup/common.sh@28 -- # mapfile -t mem 00:13:43.330 01:43:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:43.330 01:43:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4964764 kB' 'MemAvailable: 9479100 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012740 kB' 'Inactive: 3773468 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 146048 kB' 'Active(file): 1011684 kB' 'Inactive(file): 3627420 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 588 kB' 'Writeback: 0 kB' 'AnonPages: 164644 kB' 'Mapped: 68028 kB' 'Shmem: 2596 kB' 'KReclaimable: 196748 kB' 'Slab: 261920 kB' 'SReclaimable: 196748 kB' 'SUnreclaim: 65172 kB' 'KernelStack: 4404 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r 
var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.330 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.330 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 
01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 
01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.331 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.331 01:43:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.332 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.332 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.332 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.332 01:43:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.332 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.332 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.589 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.589 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.589 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.589 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.589 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.589 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.589 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.589 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.589 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.589 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:43.589 01:43:43 -- setup/common.sh@33 -- # echo 0 00:13:43.589 01:43:43 -- setup/common.sh@33 -- # return 0 00:13:43.589 01:43:43 -- setup/hugepages.sh@100 -- # resv=0 00:13:43.589 nr_hugepages=1024 00:13:43.589 01:43:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:43.589 resv_hugepages=0 00:13:43.589 01:43:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:43.589 surplus_hugepages=0 00:13:43.589 01:43:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:43.590 anon_hugepages=0 00:13:43.590 01:43:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:43.590 01:43:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:43.590 01:43:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:43.590 01:43:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:43.590 01:43:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:43.590 01:43:43 -- setup/common.sh@18 -- # local node= 00:13:43.590 01:43:43 -- setup/common.sh@19 -- # local var val 00:13:43.590 01:43:43 -- setup/common.sh@20 -- # local mem_f mem 00:13:43.590 01:43:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:43.590 01:43:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:43.590 01:43:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:43.590 01:43:43 -- setup/common.sh@28 -- # mapfile -t mem 00:13:43.590 01:43:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4965016 kB' 'MemAvailable: 9479356 kB' 
'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012732 kB' 'Inactive: 3773548 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146124 kB' 'Active(file): 1011684 kB' 'Inactive(file): 3627424 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 588 kB' 'Writeback: 0 kB' 'AnonPages: 164712 kB' 'Mapped: 68008 kB' 'Shmem: 2596 kB' 'KReclaimable: 196748 kB' 'Slab: 261960 kB' 'SReclaimable: 196748 kB' 'SUnreclaim: 65212 kB' 'KernelStack: 4404 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 
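The AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups above all returned 0, and the script echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before starting this HugePages_Total scan. The check being performed is that the system-wide total equals the requested count plus surplus and reserved pages; a self-contained sketch of that check (illustrative helper, not the script's code):

    # Consistency check behind the scans in this section of the log.
    meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    nr_hugepages=1024                                  # requested by the test
    surp=$(meminfo_val HugePages_Surp)                 # 0 in this run
    resv=$(meminfo_val HugePages_Rsvd)                 # 0 in this run
    total=$(meminfo_val HugePages_Total)               # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage count mismatch' >&2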
00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.590 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.590 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:43.591 01:43:43 -- setup/common.sh@33 -- # echo 1024 00:13:43.591 01:43:43 -- setup/common.sh@33 -- # return 0 00:13:43.591 01:43:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:43.591 01:43:43 -- setup/hugepages.sh@112 -- # get_nodes 00:13:43.591 01:43:43 -- setup/hugepages.sh@27 -- # local node 00:13:43.591 01:43:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:43.591 01:43:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:43.591 01:43:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:43.591 01:43:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:43.591 01:43:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:43.591 01:43:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:43.591 01:43:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:43.591 01:43:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:43.591 01:43:43 -- setup/common.sh@18 -- # local node=0 00:13:43.591 
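Note: the trace above and below is setup/common.sh's get_meminfo helper walking a meminfo file one "key: value" record at a time and echoing the value once it reaches the requested key. A minimal sketch of that helper, reconstructed from the trace; names follow the trace, but this is a simplified illustration, not the SPDK script itself:

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local -a mem
        local mem_f=/proc/meminfo
        # when a NUMA node is given and a per-node file exists, read that instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every record with "Node <n> "; strip it so keys match /proc/meminfo
        mem=("${mem[@]#Node $node }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every key until the requested one
            echo "$val"                        # e.g. 1024 for HugePages_Total above
            return 0
        done
        return 1
    }
    # get_meminfo HugePages_Surp 0   -> surplus pages on node0 (0 in this run)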
01:43:43 -- setup/common.sh@19 -- # local var val 00:13:43.591 01:43:43 -- setup/common.sh@20 -- # local mem_f mem 00:13:43.591 01:43:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:43.591 01:43:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:43.591 01:43:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:43.591 01:43:43 -- setup/common.sh@28 -- # mapfile -t mem 00:13:43.591 01:43:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:43.591 01:43:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4965288 kB' 'MemUsed: 7277676 kB' 'SwapCached: 0 kB' 'Active: 1012732 kB' 'Inactive: 3773472 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146048 kB' 'Active(file): 1011684 kB' 'Inactive(file): 3627424 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 588 kB' 'Writeback: 0 kB' 'FilePages: 4650768 kB' 'Mapped: 68008 kB' 'AnonPages: 164636 kB' 'Shmem: 2596 kB' 'KernelStack: 4456 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196748 kB' 'Slab: 261960 kB' 'SReclaimable: 196748 kB' 'SUnreclaim: 65212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 
01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.591 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.591 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # continue 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # IFS=': ' 00:13:43.592 01:43:43 -- setup/common.sh@31 -- # read -r var val _ 00:13:43.592 01:43:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:43.592 01:43:43 -- setup/common.sh@33 -- # echo 0 00:13:43.592 01:43:43 -- setup/common.sh@33 -- # return 0 00:13:43.592 01:43:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:43.592 01:43:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:43.592 01:43:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:43.592 01:43:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:43.592 01:43:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:43.592 node0=1024 expecting 1024 00:13:43.592 01:43:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:43.592 00:13:43.592 real 0m1.264s 00:13:43.592 user 0m0.325s 00:13:43.592 sys 0m0.927s 00:13:43.592 01:43:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:43.592 01:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:43.592 ************************************ 00:13:43.592 END TEST default_setup 00:13:43.592 ************************************ 00:13:43.592 01:43:43 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:13:43.592 01:43:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:43.592 01:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.592 01:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:43.592 ************************************ 00:13:43.592 START TEST per_node_1G_alloc 00:13:43.592 ************************************ 00:13:43.592 01:43:43 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:13:43.592 01:43:43 -- setup/hugepages.sh@143 -- # local IFS=, 00:13:43.592 01:43:43 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:13:43.592 01:43:43 -- setup/hugepages.sh@49 -- # local size=1048576 00:13:43.592 01:43:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:43.592 01:43:43 -- setup/hugepages.sh@51 -- # shift 00:13:43.592 01:43:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:43.592 01:43:43 -- setup/hugepages.sh@52 -- # local node_ids 00:13:43.592 01:43:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:43.592 01:43:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:13:43.592 01:43:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:43.592 01:43:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:43.592 01:43:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:43.592 01:43:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:43.592 01:43:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:43.592 01:43:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:43.592 01:43:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:43.592 01:43:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:43.592 01:43:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:43.592 01:43:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:13:43.592 01:43:43 -- setup/hugepages.sh@73 -- # return 0 00:13:43.592 01:43:43 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:13:43.592 01:43:43 
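Note: per_node_1G_alloc, started just above, asks get_test_nr_hugepages for 1048576 kB restricted to node 0; with the 2048 kB Hugepagesize shown in the /proc/meminfo dumps in this log, that is the 512 pages the trace assigns to nodes_test[0] and exports as NRHUGE=512 HUGENODE=0. A small sketch of that sizing arithmetic, an illustration under those numbers rather than the SPDK helper itself:

    size_kb=1048576                                                # 1 GiB requested by the test
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this runner
    nr_hugepages=$((size_kb / hugepage_kb))                        # 1048576 / 2048 = 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0"                         # environment handed to scripts/setup.sh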
-- setup/hugepages.sh@146 -- # HUGENODE=0 00:13:43.592 01:43:43 -- setup/hugepages.sh@146 -- # setup output 00:13:43.592 01:43:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:43.593 01:43:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:43.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:44.107 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:44.368 01:43:44 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:13:44.368 01:43:44 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:13:44.368 01:43:44 -- setup/hugepages.sh@89 -- # local node 00:13:44.368 01:43:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:44.368 01:43:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:44.368 01:43:44 -- setup/hugepages.sh@92 -- # local surp 00:13:44.368 01:43:44 -- setup/hugepages.sh@93 -- # local resv 00:13:44.368 01:43:44 -- setup/hugepages.sh@94 -- # local anon 00:13:44.368 01:43:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:44.368 01:43:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:44.368 01:43:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:44.368 01:43:44 -- setup/common.sh@18 -- # local node= 00:13:44.368 01:43:44 -- setup/common.sh@19 -- # local var val 00:13:44.368 01:43:44 -- setup/common.sh@20 -- # local mem_f mem 00:13:44.368 01:43:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:44.368 01:43:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:44.368 01:43:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:44.368 01:43:44 -- setup/common.sh@28 -- # mapfile -t mem 00:13:44.368 01:43:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:44.368 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.368 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.368 01:43:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6011652 kB' 'MemAvailable: 10526008 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012752 kB' 'Inactive: 3773796 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 146380 kB' 'Active(file): 1011692 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 164876 kB' 'Mapped: 67968 kB' 'Shmem: 2596 kB' 'KReclaimable: 196764 kB' 'Slab: 262068 kB' 'SReclaimable: 196764 kB' 'SUnreclaim: 65304 kB' 'KernelStack: 4444 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 
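Note: verify_nr_hugepages, whose per-key trace continues below, re-reads /proc/meminfo and accepts the pool only if the kernel's HugePages_Total equals the requested count plus surplus and reserved pages (512 == 512 + 0 + 0 in this run); anonymous huge pages are checked separately when transparent hugepages are not set to [never]. A rough standalone sketch of that accounting; the meminfo helper here is hypothetical and exists only for the sketch:

    meminfo() { awk -v key="$1:" '$1 == key {print $2; exit}' /proc/meminfo; }  # hypothetical helper
    surp=$(meminfo HugePages_Surp)
    resv=$(meminfo HugePages_Rsvd)
    total=$(meminfo HugePages_Total)
    echo "surplus_hugepages=$surp resv_hugepages=$resv"
    # the test passes this step only when the totals reconcile, as they do in the run below
    (( total == 512 + surp + resv )) && echo "pool consistent: $total == 512 + $surp + $resv"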
-- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 
01:43:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.369 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.369 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:44.370 01:43:44 -- setup/common.sh@33 -- # echo 0 00:13:44.370 01:43:44 -- setup/common.sh@33 -- # return 0 00:13:44.370 01:43:44 -- setup/hugepages.sh@97 -- # anon=0 00:13:44.370 01:43:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:44.370 01:43:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:44.370 01:43:44 -- setup/common.sh@18 -- # local node= 00:13:44.370 01:43:44 -- setup/common.sh@19 -- # local var val 00:13:44.370 01:43:44 -- setup/common.sh@20 -- # local mem_f mem 00:13:44.370 01:43:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:44.370 01:43:44 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:13:44.370 01:43:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:44.370 01:43:44 -- setup/common.sh@28 -- # mapfile -t mem 00:13:44.370 01:43:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6011912 kB' 'MemAvailable: 10526268 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012748 kB' 'Inactive: 3773572 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 146156 kB' 'Active(file): 1011692 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 164784 kB' 'Mapped: 68008 kB' 'Shmem: 2596 kB' 'KReclaimable: 196764 kB' 'Slab: 262196 kB' 'SReclaimable: 196764 kB' 'SUnreclaim: 65432 kB' 'KernelStack: 4416 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 
01:43:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 
-- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.370 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.370 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.371 01:43:44 -- setup/common.sh@33 -- # echo 0 00:13:44.371 01:43:44 -- setup/common.sh@33 -- # return 0 00:13:44.371 01:43:44 -- setup/hugepages.sh@99 -- # surp=0 00:13:44.371 01:43:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:44.371 01:43:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:44.371 01:43:44 -- setup/common.sh@18 -- # local node= 00:13:44.371 01:43:44 -- setup/common.sh@19 -- # local var val 00:13:44.371 01:43:44 -- setup/common.sh@20 -- # local mem_f mem 00:13:44.371 01:43:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:44.371 01:43:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:44.371 01:43:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:44.371 01:43:44 -- setup/common.sh@28 -- # mapfile -t mem 00:13:44.371 01:43:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:44.371 01:43:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6012380 kB' 'MemAvailable: 10526736 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012748 kB' 'Inactive: 3773644 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 146228 kB' 'Active(file): 1011692 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 164584 kB' 'Mapped: 68008 kB' 'Shmem: 2596 kB' 'KReclaimable: 196764 kB' 'Slab: 261916 kB' 'SReclaimable: 196764 kB' 'SUnreclaim: 65152 kB' 'KernelStack: 4424 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.371 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.371 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # 
continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.372 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.372 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 
01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:44.373 01:43:44 -- setup/common.sh@33 -- # echo 0 00:13:44.373 01:43:44 -- setup/common.sh@33 -- # return 0 00:13:44.373 01:43:44 -- setup/hugepages.sh@100 -- # resv=0 00:13:44.373 01:43:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:13:44.373 nr_hugepages=512 00:13:44.373 resv_hugepages=0 00:13:44.373 01:43:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:44.373 surplus_hugepages=0 00:13:44.373 01:43:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:44.373 anon_hugepages=0 00:13:44.373 01:43:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:44.373 01:43:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:44.373 01:43:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:13:44.373 01:43:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:44.373 01:43:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:44.373 01:43:44 -- setup/common.sh@18 -- # local node= 00:13:44.373 01:43:44 -- setup/common.sh@19 -- # local var val 00:13:44.373 01:43:44 -- setup/common.sh@20 -- # local mem_f mem 00:13:44.373 01:43:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:44.373 01:43:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:44.373 01:43:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:44.373 01:43:44 -- setup/common.sh@28 -- # mapfile -t mem 00:13:44.373 01:43:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6012380 kB' 'MemAvailable: 10526736 kB' 'Buffers: 35984 kB' 'Cached: 4614784 kB' 'SwapCached: 0 kB' 'Active: 1012748 kB' 'Inactive: 3773364 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145948 kB' 'Active(file): 1011692 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 164556 kB' 'Mapped: 68048 kB' 'Shmem: 2596 kB' 'KReclaimable: 196764 kB' 'Slab: 261916 kB' 'SReclaimable: 196764 kB' 'SUnreclaim: 65152 kB' 'KernelStack: 4376 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 
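The repeated [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue pairs in this trace are setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node sysfs meminfo file) one field at a time until the requested key matches, then echoing its value. A minimal sketch of that lookup pattern, assuming the field layout shown in the printf lines of this log; it is illustrative only, not the exact SPDK setup/common.sh source:

    # Sketch, not the SPDK script itself: return one meminfo field's value.
    # Pass a node number to read the per-node copy under /sys instead.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node N "; strip that first.
        while IFS=': ' read -r var val _; do
            # Each key that does not match shows up as one "continue" in the xtrace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
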
00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.373 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.373 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 
01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:44.374 01:43:44 -- setup/common.sh@33 -- # echo 512 00:13:44.374 01:43:44 -- setup/common.sh@33 -- # return 0 00:13:44.374 01:43:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:44.374 01:43:44 -- setup/hugepages.sh@112 -- # get_nodes 00:13:44.374 01:43:44 -- setup/hugepages.sh@27 -- # local node 00:13:44.374 01:43:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:44.374 01:43:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:44.374 01:43:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:44.374 01:43:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:44.374 01:43:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:44.374 01:43:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:44.374 01:43:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:44.374 01:43:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:44.374 01:43:44 -- setup/common.sh@18 -- # local node=0 00:13:44.374 01:43:44 -- setup/common.sh@19 -- # local var val 00:13:44.374 01:43:44 -- setup/common.sh@20 -- # local mem_f mem 00:13:44.374 01:43:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:44.374 01:43:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:44.374 01:43:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:44.374 01:43:44 -- setup/common.sh@28 -- # mapfile -t mem 00:13:44.374 01:43:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.374 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.374 01:43:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6012380 kB' 'MemUsed: 6230584 kB' 'SwapCached: 0 kB' 'Active: 1012748 kB' 'Inactive: 3773208 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145792 kB' 'Active(file): 1011692 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4650768 kB' 'Mapped: 68048 kB' 'AnonPages: 164396 kB' 'Shmem: 2596 kB' 'KernelStack: 4412 kB' 'PageTables: 3404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 196764 kB' 'Slab: 261916 kB' 'SReclaimable: 196764 kB' 'SUnreclaim: 65152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:44.374 01:43:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # continue 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # IFS=': ' 00:13:44.375 01:43:44 -- setup/common.sh@31 -- # read -r var val _ 00:13:44.375 01:43:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:44.375 01:43:44 -- setup/common.sh@33 -- # echo 0 00:13:44.375 01:43:44 -- setup/common.sh@33 -- # return 0 00:13:44.375 01:43:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:44.375 01:43:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:44.375 01:43:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:44.375 01:43:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:44.375 node0=512 expecting 512 00:13:44.376 01:43:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:13:44.376 01:43:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:44.376 00:13:44.376 real 0m0.789s 00:13:44.376 user 0m0.317s 
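At this point the per_node_1G_alloc check has passed: HugePages_Total (512) equals the requested nr_hugepages plus the reserved and surplus counts read back above, and node0 reports the full 512 pages ("node0=512 expecting 512"). A self-contained sketch of that arithmetic, with names taken from the log rather than the exact setup/hugepages.sh source:

    # Sketch of the verification seen in the trace, not the SPDK script itself.
    nr_hugepages=512
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    resv=$(meminfo_val HugePages_Rsvd)    # 0 in this run
    surp=$(meminfo_val HugePages_Surp)    # 0 in this run
    total=$(meminfo_val HugePages_Total)  # 512 in this run
    (( total == nr_hugepages + surp + resv )) &&
        echo "node0=$total expecting $nr_hugepages"
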
00:13:44.376 sys 0m0.526s 00:13:44.376 01:43:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:44.376 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:44.376 ************************************ 00:13:44.376 END TEST per_node_1G_alloc 00:13:44.376 ************************************ 00:13:44.376 01:43:44 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:13:44.376 01:43:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.376 01:43:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.376 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:44.635 ************************************ 00:13:44.635 START TEST even_2G_alloc 00:13:44.635 ************************************ 00:13:44.635 01:43:44 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:13:44.635 01:43:44 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:13:44.635 01:43:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:13:44.635 01:43:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:44.635 01:43:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:44.635 01:43:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:44.635 01:43:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:44.635 01:43:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:44.635 01:43:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:44.635 01:43:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:44.635 01:43:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:44.635 01:43:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:44.635 01:43:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:44.635 01:43:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:44.635 01:43:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:44.635 01:43:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:44.635 01:43:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:13:44.635 01:43:44 -- setup/hugepages.sh@83 -- # : 0 00:13:44.635 01:43:44 -- setup/hugepages.sh@84 -- # : 0 00:13:44.635 01:43:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:44.635 01:43:44 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:13:44.635 01:43:44 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:13:44.635 01:43:44 -- setup/hugepages.sh@153 -- # setup output 00:13:44.635 01:43:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:44.635 01:43:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:44.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:44.893 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:45.464 01:43:45 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:13:45.464 01:43:45 -- setup/hugepages.sh@89 -- # local node 00:13:45.464 01:43:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:45.464 01:43:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:45.464 01:43:45 -- setup/hugepages.sh@92 -- # local surp 00:13:45.464 01:43:45 -- setup/hugepages.sh@93 -- # local resv 00:13:45.464 01:43:45 -- setup/hugepages.sh@94 -- # local anon 00:13:45.464 01:43:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:45.464 01:43:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:45.464 01:43:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:45.464 01:43:45 -- setup/common.sh@18 -- # local node= 00:13:45.464 01:43:45 -- setup/common.sh@19 -- # 
local var val 00:13:45.464 01:43:45 -- setup/common.sh@20 -- # local mem_f mem 00:13:45.464 01:43:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:45.464 01:43:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:45.464 01:43:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:45.464 01:43:45 -- setup/common.sh@28 -- # mapfile -t mem 00:13:45.464 01:43:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:45.464 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.464 01:43:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4964768 kB' 'MemAvailable: 9479120 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012752 kB' 'Inactive: 3773492 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 146076 kB' 'Active(file): 1011696 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 0 kB' 'AnonPages: 164672 kB' 'Mapped: 68016 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262052 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65296 kB' 'KernelStack: 4392 kB' 'PageTables: 3500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:45.464 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.464 01:43:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.464 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.464 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.464 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.464 01:43:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 
01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.465 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.465 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:45.466 01:43:45 -- setup/common.sh@33 -- # echo 0 00:13:45.466 01:43:45 -- setup/common.sh@33 -- # return 0 00:13:45.466 01:43:45 -- setup/hugepages.sh@97 -- # anon=0 00:13:45.466 01:43:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:45.466 01:43:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:45.466 01:43:45 -- setup/common.sh@18 -- # local node= 00:13:45.466 01:43:45 -- setup/common.sh@19 -- # local var val 00:13:45.466 01:43:45 -- setup/common.sh@20 -- # local mem_f mem 00:13:45.466 01:43:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:45.466 01:43:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:45.466 01:43:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:45.466 01:43:45 -- setup/common.sh@28 -- # mapfile -t mem 00:13:45.466 01:43:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4964768 kB' 'MemAvailable: 9479120 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012752 kB' 'Inactive: 3773220 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145804 kB' 'Active(file): 1011696 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 0 kB' 'AnonPages: 164380 kB' 'Mapped: 68016 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262052 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65296 kB' 'KernelStack: 4376 kB' 'PageTables: 3464 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 
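The even_2G_alloc run traced here requested 2097152 kB of hugepages; with the 2048 kB Hugepagesize reported in the meminfo dumps above, that works out to the HugePages_Total of 1024 being verified in this scan. A hedged sketch of that sizing arithmetic, using illustrative names rather than the exact get_test_nr_hugepages source:

    # Sketch only: convert a kB size into a hugepage count and split per node.
    size_kb=2097152
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024 with 2048 kB pages
    no_nodes=1                                      # single NUMA node in this VM
    echo "per-node target: $(( nr_hugepages / no_nodes )) hugepages"
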
00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.466 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.466 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.467 01:43:45 -- setup/common.sh@33 -- # echo 0 00:13:45.467 01:43:45 -- setup/common.sh@33 -- # return 0 00:13:45.467 01:43:45 -- setup/hugepages.sh@99 -- # surp=0 00:13:45.467 01:43:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:45.467 01:43:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:45.467 01:43:45 -- setup/common.sh@18 -- # local node= 00:13:45.467 01:43:45 -- setup/common.sh@19 -- # local var val 00:13:45.467 01:43:45 -- setup/common.sh@20 -- # local mem_f mem 00:13:45.467 01:43:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:45.467 01:43:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:45.467 01:43:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:45.467 01:43:45 -- setup/common.sh@28 -- # mapfile -t mem 00:13:45.467 01:43:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:45.467 01:43:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4964768 kB' 'MemAvailable: 9479128 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3773012 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145596 kB' 'Active(file): 1011704 kB' 
'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 164420 kB' 'Mapped: 68016 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262052 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65296 kB' 'KernelStack: 4328 kB' 'PageTables: 3352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.467 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.467 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 
-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 
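[Editor's note] The long runs of "IFS=': '", "read -r var val _", "[[ X == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" and "continue" entries above and below are the xtrace of setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo (or a per-node meminfo file) for one key, here HugePages_Rsvd. The following is a minimal sketch reconstructed from this trace, not the verbatim SPDK source; details such as the exact option handling are assumptions.

shopt -s extglob                 # the "Node N " prefix strip below uses an extended glob
get_meminfo() {
    local get=$1 node=${2:-}     # key to look up, optional NUMA node number
    local var val _
    local mem_f=/proc/meminfo mem
    # per-node lookups read that node's own meminfo when it exists (node= empty -> global file)
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix each line with "Node N "
    while IFS=': ' read -r var val _; do    # splits "HugePages_Rsvd:   0" into var/val
        [[ $var == "$get" ]] || continue    # each "[[ X == \H\u\g\e... ]]" line in the trace is this test
        echo "$val"                         # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")     # the quoted printf of all meminfo fields seen in the trace
}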
00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 
01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.468 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.468 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:45.469 01:43:45 -- setup/common.sh@33 -- # echo 0 00:13:45.469 01:43:45 -- setup/common.sh@33 -- # return 0 00:13:45.469 01:43:45 -- setup/hugepages.sh@100 -- # resv=0 00:13:45.469 nr_hugepages=1024 00:13:45.469 01:43:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:45.469 resv_hugepages=0 00:13:45.469 01:43:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:45.469 surplus_hugepages=0 00:13:45.469 01:43:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:45.469 anon_hugepages=0 00:13:45.469 01:43:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:45.469 01:43:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:45.469 01:43:45 -- 
setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:45.469 01:43:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:45.469 01:43:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:45.469 01:43:45 -- setup/common.sh@18 -- # local node= 00:13:45.469 01:43:45 -- setup/common.sh@19 -- # local var val 00:13:45.469 01:43:45 -- setup/common.sh@20 -- # local mem_f mem 00:13:45.469 01:43:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:45.469 01:43:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:45.469 01:43:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:45.469 01:43:45 -- setup/common.sh@28 -- # mapfile -t mem 00:13:45.469 01:43:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4965020 kB' 'MemAvailable: 9479380 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3773360 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145944 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 164544 kB' 'Mapped: 68016 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262052 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65296 kB' 'KernelStack: 4376 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 520944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.469 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.469 01:43:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:45.470 01:43:45 -- setup/common.sh@33 -- # echo 1024 00:13:45.470 01:43:45 -- 
setup/common.sh@33 -- # return 0 00:13:45.470 01:43:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:45.470 01:43:45 -- setup/hugepages.sh@112 -- # get_nodes 00:13:45.470 01:43:45 -- setup/hugepages.sh@27 -- # local node 00:13:45.470 01:43:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:45.470 01:43:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:45.470 01:43:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:45.470 01:43:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:45.470 01:43:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:45.470 01:43:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:45.470 01:43:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:45.470 01:43:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:45.470 01:43:45 -- setup/common.sh@18 -- # local node=0 00:13:45.470 01:43:45 -- setup/common.sh@19 -- # local var val 00:13:45.470 01:43:45 -- setup/common.sh@20 -- # local mem_f mem 00:13:45.470 01:43:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:45.470 01:43:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:45.470 01:43:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:45.470 01:43:45 -- setup/common.sh@28 -- # mapfile -t mem 00:13:45.470 01:43:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:45.470 01:43:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4965272 kB' 'MemUsed: 7277692 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3773176 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145760 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'FilePages: 4650780 kB' 'Mapped: 68016 kB' 'AnonPages: 164620 kB' 'Shmem: 2596 kB' 'KernelStack: 4412 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196756 kB' 'Slab: 262052 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.470 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.470 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 
01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # continue 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # IFS=': ' 00:13:45.471 01:43:45 -- setup/common.sh@31 -- # read -r var val _ 00:13:45.471 01:43:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:45.471 01:43:45 -- setup/common.sh@33 -- # echo 0 00:13:45.471 01:43:45 -- setup/common.sh@33 -- # return 0 00:13:45.471 01:43:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:45.471 01:43:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:45.471 01:43:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:45.471 01:43:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:45.471 node0=1024 expecting 1024 00:13:45.471 01:43:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:45.471 01:43:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:45.471 00:13:45.471 real 0m1.018s 00:13:45.471 user 0m0.371s 00:13:45.471 sys 0m0.688s 00:13:45.471 01:43:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.471 01:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:45.471 ************************************ 00:13:45.472 END TEST even_2G_alloc 00:13:45.472 ************************************ 00:13:45.472 01:43:45 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:13:45.472 01:43:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:45.472 01:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.472 01:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:45.731 ************************************ 00:13:45.731 START TEST odd_alloc 00:13:45.731 ************************************ 00:13:45.731 01:43:45 -- common/autotest_common.sh@1111 -- # odd_alloc 00:13:45.731 01:43:45 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:13:45.731 01:43:45 -- setup/hugepages.sh@49 -- # local size=2098176 00:13:45.731 01:43:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:45.731 01:43:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:45.731 01:43:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:13:45.731 01:43:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:45.731 01:43:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:45.731 01:43:45 -- setup/hugepages.sh@62 -- # 
local user_nodes 00:13:45.731 01:43:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:13:45.731 01:43:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:45.731 01:43:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:45.731 01:43:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:45.731 01:43:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:45.731 01:43:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:45.731 01:43:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:45.731 01:43:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:13:45.731 01:43:45 -- setup/hugepages.sh@83 -- # : 0 00:13:45.731 01:43:45 -- setup/hugepages.sh@84 -- # : 0 00:13:45.731 01:43:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:45.731 01:43:45 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:13:45.731 01:43:45 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:13:45.731 01:43:45 -- setup/hugepages.sh@160 -- # setup output 00:13:45.731 01:43:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:45.731 01:43:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:45.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:45.990 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:46.559 01:43:46 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:13:46.559 01:43:46 -- setup/hugepages.sh@89 -- # local node 00:13:46.559 01:43:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:46.559 01:43:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:46.559 01:43:46 -- setup/hugepages.sh@92 -- # local surp 00:13:46.559 01:43:46 -- setup/hugepages.sh@93 -- # local resv 00:13:46.559 01:43:46 -- setup/hugepages.sh@94 -- # local anon 00:13:46.559 01:43:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:46.559 01:43:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:46.559 01:43:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:46.559 01:43:46 -- setup/common.sh@18 -- # local node= 00:13:46.559 01:43:46 -- setup/common.sh@19 -- # local var val 00:13:46.559 01:43:46 -- setup/common.sh@20 -- # local mem_f mem 00:13:46.559 01:43:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:46.559 01:43:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:46.559 01:43:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:46.559 01:43:46 -- setup/common.sh@28 -- # mapfile -t mem 00:13:46.559 01:43:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4968332 kB' 'MemAvailable: 9482692 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3769604 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142188 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 160784 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 261908 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65152 kB' 'KernelStack: 4328 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 
'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
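[Editor's note] After these per-field scans, setup/hugepages.sh's verify_nr_hugepages cross-checks the global counters against the requested pool and then the per-node counters ("node0=1024 expecting 1024" in the even_2G_alloc output above; 1025 pages for the odd_alloc test running here). A simplified sketch of that accounting follows; the argument-passing style and the per-node loop body are assumptions (the traced script appears to use globals such as nodes_test), and it reuses the get_meminfo sketch given earlier.

verify_nr_hugepages() {
    local nr_hugepages=$1            # 1024 in the even_2G_alloc run, 1025 for odd_alloc
    local surp resv node
    surp=$(get_meminfo HugePages_Surp)    # -> 0 in the trace
    resv=$(get_meminfo HugePages_Rsvd)    # -> 0 in the trace
    # the whole pool must be accounted for before per-node checks
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
    # on this single-node VM, node0 is expected to hold the full pool
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node/meminfo ]] || continue
        node=${node##*node}
        echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
    done
}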
00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.559 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.559 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- 
# [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:46.560 01:43:46 -- setup/common.sh@33 -- # echo 0 00:13:46.560 01:43:46 -- setup/common.sh@33 -- # return 0 00:13:46.560 01:43:46 -- setup/hugepages.sh@97 -- # anon=0 00:13:46.560 01:43:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:46.560 01:43:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:46.560 01:43:46 -- setup/common.sh@18 -- # local node= 00:13:46.560 01:43:46 -- setup/common.sh@19 -- # local var val 00:13:46.560 01:43:46 -- setup/common.sh@20 -- # local mem_f mem 00:13:46.560 01:43:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:46.560 01:43:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:46.560 01:43:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:46.560 01:43:46 -- setup/common.sh@28 -- # mapfile -t mem 00:13:46.560 01:43:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4968332 kB' 'MemAvailable: 9482692 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3769664 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142248 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 160860 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 261908 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65152 kB' 'KernelStack: 4360 kB' 'PageTables: 3508 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.560 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.560 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # 
continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.561 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.561 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 
01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.562 01:43:46 -- setup/common.sh@33 -- # echo 0 00:13:46.562 01:43:46 -- setup/common.sh@33 -- # return 0 00:13:46.562 01:43:46 -- setup/hugepages.sh@99 -- # surp=0 00:13:46.562 01:43:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:46.562 01:43:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:46.562 01:43:46 -- setup/common.sh@18 -- # local node= 00:13:46.562 01:43:46 -- setup/common.sh@19 -- # local var val 00:13:46.562 01:43:46 -- setup/common.sh@20 -- # local mem_f mem 00:13:46.562 01:43:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:46.562 01:43:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:46.562 01:43:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:46.562 01:43:46 -- setup/common.sh@28 -- # mapfile -t mem 00:13:46.562 01:43:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4968332 kB' 'MemAvailable: 9482692 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3769632 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142216 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 160828 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 261908 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65152 kB' 'KernelStack: 4344 kB' 'PageTables: 3468 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 
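The xtrace above is the suite's get_meminfo helper doing a field lookup: it snapshots /proc/meminfo (or a node-specific meminfo file), strips any "Node <n>" prefix, then scans "key: value" pairs until the requested field (here HugePages_Surp) is found and echoed. A minimal standalone sketch of that logic, reconstructed from the traced statements rather than copied from setup/common.sh:

#!/usr/bin/env bash
# Sketch of the meminfo lookup walked through in the trace above.
# Reconstructed from the traced statements; not the verbatim setup/common.sh.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}          # field name, optional NUMA node
    local mem_f=/proc/meminfo mem line var val _

    # Per-node lookups read the node-specific meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp       # system-wide surplus pages (0 in this run)
get_meminfo HugePages_Surp 0     # same field, restricted to node 0
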
01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 
01:43:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.562 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.562 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 
00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.563 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.563 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:46.563 01:43:46 -- setup/common.sh@33 -- # echo 0 00:13:46.563 01:43:46 -- setup/common.sh@33 -- # return 0 00:13:46.563 01:43:46 -- setup/hugepages.sh@100 -- # resv=0 00:13:46.563 nr_hugepages=1025 00:13:46.563 01:43:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:13:46.563 resv_hugepages=0 00:13:46.563 01:43:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:46.563 surplus_hugepages=0 00:13:46.563 01:43:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:46.563 anon_hugepages=0 00:13:46.563 01:43:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:46.563 01:43:46 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:13:46.563 01:43:46 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:13:46.563 01:43:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:46.563 01:43:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:46.564 01:43:46 -- setup/common.sh@18 -- # local node= 00:13:46.564 01:43:46 -- setup/common.sh@19 -- # local var val 00:13:46.564 01:43:46 -- setup/common.sh@20 -- # local mem_f mem 00:13:46.564 01:43:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:46.564 01:43:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:46.564 01:43:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:46.564 01:43:46 -- setup/common.sh@28 -- # mapfile -t mem 00:13:46.564 01:43:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4968332 kB' 'MemAvailable: 9482692 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3769312 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141896 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 160508 
kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 261908 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65152 kB' 'KernelStack: 4380 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 
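Taken together, the lookups traced in this stretch implement the odd_alloc accounting: anon, surplus and reserved hugepages are read, then the configured count (1025 pages) is checked against both the expected sum and HugePages_Total. A condensed sketch of that verification, with an awk helper standing in for get_meminfo (illustrative, not the suite's verify_nr_hugepages):

#!/usr/bin/env bash
# Condensed sketch of the hugepage accounting asserted by the trace.
# meminfo_val is an illustrative stand-in for the suite's get_meminfo.
set -euo pipefail

meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1025                      # count configured by the odd_alloc test
surp=$(meminfo_val HugePages_Surp)     # 0 in this run
resv=$(meminfo_val HugePages_Rsvd)     # 0 in this run
total=$(meminfo_val HugePages_Total)   # 1025 in this run

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# Every configured page must be accounted for in the pool:
(( total == nr_hugepages + surp + resv ))
(( total == nr_hugepages ))
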
00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 
01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.564 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.564 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.565 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.565 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:46.565 01:43:46 -- setup/common.sh@33 -- # echo 1025 00:13:46.565 01:43:46 -- setup/common.sh@33 -- # return 0 00:13:46.565 01:43:46 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:13:46.565 01:43:46 -- setup/hugepages.sh@112 -- # get_nodes 00:13:46.565 01:43:46 -- setup/hugepages.sh@27 -- # local node 00:13:46.565 01:43:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:46.565 01:43:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:13:46.565 01:43:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:46.565 01:43:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:46.565 01:43:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:46.565 01:43:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:46.565 01:43:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:46.565 01:43:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:46.565 01:43:46 -- setup/common.sh@18 -- # local node=0 00:13:46.565 01:43:46 -- setup/common.sh@19 -- # local var val 00:13:46.565 01:43:46 -- setup/common.sh@20 -- # local mem_f mem 00:13:46.565 01:43:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:46.565 01:43:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:46.565 01:43:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:46.566 01:43:46 -- 
setup/common.sh@28 -- # mapfile -t mem 00:13:46.566 01:43:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4968332 kB' 'MemUsed: 7274632 kB' 'SwapCached: 0 kB' 'Active: 1012752 kB' 'Inactive: 3769348 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141932 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627416 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 4650780 kB' 'Mapped: 67168 kB' 'AnonPages: 160528 kB' 'Shmem: 2596 kB' 'KernelStack: 4312 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196756 kB' 'Slab: 261900 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 
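This node-scoped lookup (note the switch to /sys/devices/system/node/node0/meminfo) feeds the per-node half of the check: each node's actual hugepage count is compared against the pages the test expects on that node plus any reserved pages, producing the "node0=1025 expecting 1025" line a little further down. A rough sketch of that loop, reconstructed from the traced hugepages.sh statements (array names are illustrative):

#!/usr/bin/env bash
# Per-node hugepage check, reconstructed from the traced hugepages.sh logic.
# nodes_expected mirrors the suite's nodes_test array; the names are illustrative.
shopt -s nullglob

declare -A nodes_sys nodes_expected
resv=0                                  # HugePages_Rsvd, 0 in this run

# Read each NUMA node's actual hugepage count from its meminfo file.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_sys[$node]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
done

# odd_alloc expects the whole 1025-page pool on the single node.
nodes_expected[0]=1025
for node in "${!nodes_expected[@]}"; do
    (( nodes_expected[node] += resv ))
    echo "node$node=${nodes_sys[$node]} expecting ${nodes_expected[$node]}"
    [[ ${nodes_sys[$node]} == "${nodes_expected[$node]}" ]] || exit 1
done
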
01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- 
# continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.566 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.566 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.567 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.567 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.567 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.567 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.567 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.567 01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.567 01:43:46 -- setup/common.sh@32 -- # continue 00:13:46.567 01:43:46 -- setup/common.sh@31 -- # IFS=': ' 00:13:46.567 01:43:46 -- setup/common.sh@31 -- # read -r var val _ 00:13:46.567 
01:43:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:46.567 01:43:46 -- setup/common.sh@33 -- # echo 0 00:13:46.567 01:43:46 -- setup/common.sh@33 -- # return 0 00:13:46.567 01:43:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:46.567 01:43:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:46.567 01:43:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:46.567 01:43:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:46.567 01:43:46 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:13:46.567 node0=1025 expecting 1025 00:13:46.567 01:43:46 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:13:46.567 00:13:46.567 real 0m1.051s 00:13:46.567 user 0m0.308s 00:13:46.567 sys 0m0.782s 00:13:46.567 01:43:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.567 01:43:46 -- common/autotest_common.sh@10 -- # set +x 00:13:46.567 ************************************ 00:13:46.567 END TEST odd_alloc 00:13:46.567 ************************************ 00:13:46.825 01:43:46 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:13:46.825 01:43:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:46.825 01:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.825 01:43:46 -- common/autotest_common.sh@10 -- # set +x 00:13:46.825 ************************************ 00:13:46.825 START TEST custom_alloc 00:13:46.825 ************************************ 00:13:46.825 01:43:46 -- common/autotest_common.sh@1111 -- # custom_alloc 00:13:46.825 01:43:46 -- setup/hugepages.sh@167 -- # local IFS=, 00:13:46.825 01:43:46 -- setup/hugepages.sh@169 -- # local node 00:13:46.825 01:43:46 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:13:46.825 01:43:46 -- setup/hugepages.sh@170 -- # local nodes_hp 00:13:46.825 01:43:46 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:13:46.825 01:43:46 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:13:46.825 01:43:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:13:46.825 01:43:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:13:46.825 01:43:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:46.825 01:43:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:46.825 01:43:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:46.825 01:43:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:46.825 01:43:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:46.825 01:43:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:46.825 01:43:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:46.825 01:43:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:13:46.825 01:43:46 -- setup/hugepages.sh@83 -- # : 0 00:13:46.825 01:43:46 -- setup/hugepages.sh@84 -- # : 0 00:13:46.825 01:43:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:13:46.825 01:43:46 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:13:46.825 01:43:46 -- 
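The custom_alloc test starting here sizes its pool from a byte figure rather than a page count: the traced get_test_nr_hugepages turns the requested 1048576 (kB, i.e. 1 GiB, judging by the Hugetlb value reported later in the log) into 512 pages of the 2048 kB default size, and pins them to node 0 via the HUGENODE setting handed to scripts/setup.sh. A small sketch of that sizing step; the unit interpretation and variable names are inferred from the trace, not taken from the script source:

#!/usr/bin/env bash
# Sketch of the custom_alloc sizing step seen in the trace:
# 1048576 kB requested / 2048 kB per hugepage -> 512 pages, all on node 0.
set -euo pipefail

hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this VM

size_kb=1048576                                   # requested pool size (assumed kB)
(( size_kb >= hugepagesize_kb ))                  # same sanity check as the trace
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 512

# The trace builds HUGENODE='nodes_hp[0]=512' and runs "setup output" with it.
HUGENODE="nodes_hp[0]=$nr_hugepages"
echo "would run: HUGENODE=$HUGENODE scripts/setup.sh"
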
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:13:46.825 01:43:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:13:46.825 01:43:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:46.825 01:43:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:46.825 01:43:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:46.825 01:43:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:46.825 01:43:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:46.825 01:43:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:46.825 01:43:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:13:46.825 01:43:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:13:46.825 01:43:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:13:46.825 01:43:46 -- setup/hugepages.sh@78 -- # return 0 00:13:46.825 01:43:46 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:13:46.825 01:43:46 -- setup/hugepages.sh@187 -- # setup output 00:13:46.825 01:43:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:46.825 01:43:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:47.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:47.083 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:47.341 01:43:47 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:13:47.341 01:43:47 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:13:47.341 01:43:47 -- setup/hugepages.sh@89 -- # local node 00:13:47.341 01:43:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:47.341 01:43:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:47.341 01:43:47 -- setup/hugepages.sh@92 -- # local surp 00:13:47.341 01:43:47 -- setup/hugepages.sh@93 -- # local resv 00:13:47.341 01:43:47 -- setup/hugepages.sh@94 -- # local anon 00:13:47.341 01:43:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:47.341 01:43:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:47.341 01:43:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:47.341 01:43:47 -- setup/common.sh@18 -- # local node= 00:13:47.341 01:43:47 -- setup/common.sh@19 -- # local var val 00:13:47.341 01:43:47 -- setup/common.sh@20 -- # local mem_f mem 00:13:47.341 01:43:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:47.341 01:43:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:47.341 01:43:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:47.341 01:43:47 -- setup/common.sh@28 -- # mapfile -t mem 00:13:47.341 01:43:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:47.341 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.341 01:43:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6020924 kB' 'MemAvailable: 10535288 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3770092 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142672 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627420 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 161168 kB' 'Mapped: 67472 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 261924 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65168 kB' 'KernelStack: 
4308 kB' 'PageTables: 3788 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:47.341 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.342 01:43:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.342 01:43:47 -- 
setup/common.sh@32 -- # continue 00:13:47.342 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read 
-r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.603 01:43:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:47.603 01:43:47 -- setup/common.sh@33 -- # echo 0 00:13:47.603 01:43:47 -- setup/common.sh@33 -- # return 0 00:13:47.603 01:43:47 -- setup/hugepages.sh@97 -- # anon=0 00:13:47.603 01:43:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:47.603 01:43:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:47.603 01:43:47 -- setup/common.sh@18 -- # local node= 00:13:47.603 01:43:47 -- setup/common.sh@19 -- # local var val 00:13:47.603 01:43:47 -- setup/common.sh@20 -- # local mem_f mem 00:13:47.603 01:43:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:47.603 01:43:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:47.603 01:43:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:47.603 01:43:47 -- setup/common.sh@28 -- # mapfile -t mem 00:13:47.603 01:43:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:47.603 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.603 01:43:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6020672 kB' 'MemAvailable: 10535036 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012760 kB' 'Inactive: 3770044 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142624 kB' 'Active(file): 1011704 kB' 'Inactive(file): 3627420 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 161152 kB' 'Mapped: 67428 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262052 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65296 kB' 'KernelStack: 4276 kB' 'PageTables: 3544 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
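The get_meminfo helper being traced in these lines can be approximated by the short function below. This is a hedged sketch, not the actual setup/common.sh code: the name get_meminfo_sketch and its layout are illustrative, but the mechanics follow what the trace shows — read /proc/meminfo (or a node's meminfo file when a node id is given), strip the "Node N " prefix that per-node files carry, split each line on ':' and whitespace, and print the value whose key matches the requested one.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node values come from /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        line=${line#Node * }                    # per-node lines start with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # e.g. "HugePages_Surp: 0" -> var, val
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

For example, get_meminfo_sketch AnonHugePages and get_meminfo_sketch HugePages_Surp correspond to the two lookups traced here, and on this host both print 0, matching the "echo 0" / "return 0" lines in the log.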
00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 
00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 
-- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.604 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.604 01:43:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.605 01:43:47 -- setup/common.sh@33 -- # echo 0 00:13:47.605 01:43:47 -- setup/common.sh@33 -- # return 0 00:13:47.605 01:43:47 -- setup/hugepages.sh@99 -- # surp=0 00:13:47.605 01:43:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:47.605 01:43:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:47.605 01:43:47 -- setup/common.sh@18 -- # local node= 00:13:47.605 01:43:47 -- setup/common.sh@19 -- # local var val 00:13:47.605 01:43:47 -- setup/common.sh@20 -- # local mem_f mem 00:13:47.605 01:43:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:47.605 01:43:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:47.605 01:43:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:47.605 01:43:47 -- setup/common.sh@28 -- # mapfile -t mem 00:13:47.605 01:43:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6020412 kB' 'MemAvailable: 10534772 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012764 kB' 'Inactive: 3769376 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141972 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 160552 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262044 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65288 kB' 'KernelStack: 4268 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 
00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.605 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.605 01:43:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 
-- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 
-- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.606 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:47.606 01:43:47 -- setup/common.sh@33 -- # echo 0 00:13:47.606 01:43:47 -- setup/common.sh@33 -- # return 0 00:13:47.606 01:43:47 -- setup/hugepages.sh@100 -- # resv=0 00:13:47.606 01:43:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:13:47.606 nr_hugepages=512 00:13:47.606 resv_hugepages=0 00:13:47.606 01:43:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:47.606 surplus_hugepages=0 00:13:47.606 01:43:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:47.606 anon_hugepages=0 00:13:47.606 01:43:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:47.606 01:43:47 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:47.606 01:43:47 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:13:47.606 01:43:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:47.606 01:43:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:47.606 01:43:47 -- setup/common.sh@18 -- # local node= 00:13:47.606 01:43:47 -- setup/common.sh@19 -- # local var val 00:13:47.606 01:43:47 -- setup/common.sh@20 -- # local mem_f mem 00:13:47.606 01:43:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:47.606 01:43:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:47.606 01:43:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:47.606 01:43:47 -- setup/common.sh@28 -- # mapfile -t mem 00:13:47.606 01:43:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:47.606 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6020412 kB' 'MemAvailable: 10534772 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012764 kB' 'Inactive: 3769376 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141972 kB' 'Active(file): 1011716 kB' 'Inactive(file): 
3627404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 160812 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262044 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65288 kB' 'KernelStack: 4336 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 510164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 
-- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.607 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.607 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
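The arithmetic that verify_nr_hugepages is performing across these lookups is small; the sketch below restates it for a single requested pool (512 pages in this run). It is a simplification of the checks traced at hugepages.sh@107/@109/@110, not the script itself, and it reuses the illustrative get_meminfo_sketch helper shown earlier.

verify_nr_hugepages_sketch() {
    local expected=$1
    local anon surp resv total

    anon=$(get_meminfo_sketch AnonHugePages)   # THP usage, reported for information
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)

    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Surplus or reserved pages would mean the pool is not exactly what was requested.
    (( surp == 0 && resv == 0 )) || return 1
    # The kernel-reported total must match the requested count (512 here).
    (( total == expected )) || return 1
}

On this run the expected outcome is surp=0, resv=0 and total=512, which is exactly what the log echoes above as nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0.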
00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:47.608 01:43:47 -- setup/common.sh@33 -- # echo 512 00:13:47.608 01:43:47 -- setup/common.sh@33 -- # return 0 00:13:47.608 01:43:47 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:47.608 01:43:47 -- setup/hugepages.sh@112 -- # get_nodes 00:13:47.608 01:43:47 -- setup/hugepages.sh@27 -- # local node 00:13:47.608 01:43:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:47.608 01:43:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:47.608 01:43:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:47.608 01:43:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:47.608 01:43:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:47.608 01:43:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:47.608 01:43:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:47.608 01:43:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:47.608 01:43:47 -- setup/common.sh@18 -- # local node=0 00:13:47.608 01:43:47 -- setup/common.sh@19 -- # local var val 00:13:47.608 01:43:47 -- setup/common.sh@20 -- # local mem_f mem 00:13:47.608 01:43:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:47.608 01:43:47 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:47.608 01:43:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:47.608 01:43:47 -- setup/common.sh@28 -- # mapfile -t mem 00:13:47.608 01:43:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:47.608 01:43:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6020412 kB' 'MemUsed: 6222552 kB' 'SwapCached: 0 kB' 'Active: 1012764 kB' 'Inactive: 3769896 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142492 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 4650780 kB' 'Mapped: 67212 kB' 'AnonPages: 161072 kB' 'Shmem: 2596 kB' 'KernelStack: 4404 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196756 kB' 'Slab: 262044 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.608 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.608 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 
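The xtrace run above (and the matching one before it) is a per-key scan of a meminfo file: setup/common.sh@31 splits each line on ': ', @32 skips every key that is not the one requested, and @33 echoes the value once the key (here HugePages_Surp for node 0) matches. A minimal stand-alone sketch of that pattern, assuming bash and the file layout the trace shows; get_meminfo_value is a hypothetical name, not the SPDK setup/common.sh helper:

get_meminfo_value() {                            # sketch only, not the SPDK helper
    local key=$1 node=${2-}
    local mem_f=/proc/meminfo
    # Prefer the per-node file when a node id is given, as the trace does for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}               # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value [kB]" into key and value
        if [[ $var == "$key" ]]; then
            echo "$val"                          # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Usage matching the reads in this log: get_meminfo_value HugePages_Surp 0   -> 0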
00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # continue 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # IFS=': ' 00:13:47.609 01:43:47 -- setup/common.sh@31 -- # read -r var val _ 00:13:47.609 01:43:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:47.609 01:43:47 -- setup/common.sh@33 -- # echo 0 00:13:47.609 01:43:47 -- setup/common.sh@33 -- # return 0 00:13:47.609 01:43:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:47.609 01:43:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:47.609 01:43:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:47.609 01:43:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:47.609 01:43:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:13:47.609 node0=512 expecting 512 00:13:47.609 01:43:47 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:47.609 00:13:47.609 real 0m0.832s 00:13:47.609 user 0m0.300s 00:13:47.609 sys 0m0.579s 00:13:47.609 01:43:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:47.609 01:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.609 ************************************ 00:13:47.609 END TEST custom_alloc 00:13:47.609 ************************************ 00:13:47.609 01:43:47 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:13:47.609 01:43:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:47.609 01:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.609 01:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.609 ************************************ 00:13:47.609 START TEST no_shrink_alloc 00:13:47.609 ************************************ 00:13:47.609 01:43:47 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:13:47.609 01:43:47 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:13:47.609 01:43:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:13:47.609 01:43:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:47.609 01:43:47 -- setup/hugepages.sh@51 -- # shift 00:13:47.609 01:43:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:47.609 01:43:47 -- setup/hugepages.sh@52 -- # local node_ids 00:13:47.609 01:43:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:47.609 01:43:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:47.609 01:43:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:47.609 01:43:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:47.609 01:43:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:47.609 01:43:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:47.609 01:43:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:47.609 01:43:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:47.609 01:43:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:47.609 01:43:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:47.609 01:43:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:47.609 01:43:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:47.609 01:43:47 -- setup/hugepages.sh@73 -- # return 0 00:13:47.609 01:43:47 -- setup/hugepages.sh@198 -- # setup output 00:13:47.609 01:43:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:47.609 01:43:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:48.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding 
PCI dev 00:13:48.176 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:48.746 01:43:48 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:13:48.746 01:43:48 -- setup/hugepages.sh@89 -- # local node 00:13:48.746 01:43:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:48.746 01:43:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:48.746 01:43:48 -- setup/hugepages.sh@92 -- # local surp 00:13:48.746 01:43:48 -- setup/hugepages.sh@93 -- # local resv 00:13:48.746 01:43:48 -- setup/hugepages.sh@94 -- # local anon 00:13:48.746 01:43:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:48.746 01:43:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:48.746 01:43:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:48.746 01:43:48 -- setup/common.sh@18 -- # local node= 00:13:48.746 01:43:48 -- setup/common.sh@19 -- # local var val 00:13:48.746 01:43:48 -- setup/common.sh@20 -- # local mem_f mem 00:13:48.746 01:43:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:48.746 01:43:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:48.746 01:43:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:48.746 01:43:48 -- setup/common.sh@28 -- # mapfile -t mem 00:13:48.746 01:43:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4971336 kB' 'MemAvailable: 9485700 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012772 kB' 'Inactive: 3769656 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142248 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627408 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 160868 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262104 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65348 kB' 'KernelStack: 4284 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.746 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.746 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:48.747 01:43:48 -- setup/common.sh@33 -- # echo 0 00:13:48.747 01:43:48 -- setup/common.sh@33 -- # return 0 00:13:48.747 01:43:48 -- setup/hugepages.sh@97 -- # anon=0 00:13:48.747 01:43:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:48.747 01:43:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:48.747 01:43:48 -- setup/common.sh@18 -- # local node= 00:13:48.747 01:43:48 -- setup/common.sh@19 -- # local var val 00:13:48.747 01:43:48 -- setup/common.sh@20 -- # local mem_f mem 00:13:48.747 01:43:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:48.747 01:43:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:48.747 01:43:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:48.747 01:43:48 -- setup/common.sh@28 -- # mapfile -t mem 00:13:48.747 01:43:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4971336 kB' 'MemAvailable: 9485700 kB' 'Buffers: 35992 kB' 'Cached: 
4614792 kB' 'SwapCached: 0 kB' 'Active: 1012772 kB' 'Inactive: 3769356 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141948 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627408 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 160568 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262104 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65348 kB' 'KernelStack: 4268 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.747 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.747 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 
01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 
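Earlier in this verify pass (setup/hugepages.sh@96, just after setup.sh ran) the trace compares the string "always [madvise] never" against *\[\n\e\v\e\r\]* before reading AnonHugePages at @97. The log only shows the comparison, so the source of that string is an assumption here; it looks like the usual content of /sys/kernel/mm/transparent_hugepage/enabled. A sketch of that gate, with thp_anon_kb as a hypothetical name, reusing get_meminfo_value from the sketch above:

thp_anon_kb() {
    local mode
    # Assumed source of the "always [madvise] never" string seen at hugepages.sh@96.
    mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $mode != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages is worth counting; it is 0 kB in this run.
        get_meminfo_value AnonHugePages
    else
        echo 0
    fi
}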
00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.748 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.748 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.749 01:43:48 -- setup/common.sh@33 -- # echo 0 00:13:48.749 01:43:48 -- setup/common.sh@33 -- # return 0 00:13:48.749 01:43:48 -- setup/hugepages.sh@99 -- # surp=0 00:13:48.749 01:43:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:48.749 01:43:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
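The reads around this point collect the counters that setup/hugepages.sh needs for its consistency check: @99 stored surp=0 from the HugePages_Surp scan that just finished, and @100 is now fetching HugePages_Rsvd. The check itself appears in the trace as (( 512 == nr_hugepages + surp + resv )) in the custom_alloc pass and (( 1024 == nr_hugepages + surp + resv )) later in this one. Written out as a stand-alone check, again reusing the get_meminfo_value sketch; verify_hugepage_accounting is a hypothetical name, not the SPDK function:

verify_hugepage_accounting() {                   # sketch of the global check only
    local expected=$1
    local total surp resv
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    # Consistent when the kernel-reported total equals the requested pool plus any
    # surplus and reserved pages: 512 == 512 + 0 + 0 above, 1024 == 1024 + 0 + 0 here.
    (( total == expected + surp + resv ))
}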
00:13:48.749 01:43:48 -- setup/common.sh@18 -- # local node= 00:13:48.749 01:43:48 -- setup/common.sh@19 -- # local var val 00:13:48.749 01:43:48 -- setup/common.sh@20 -- # local mem_f mem 00:13:48.749 01:43:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:48.749 01:43:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:48.749 01:43:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:48.749 01:43:48 -- setup/common.sh@28 -- # mapfile -t mem 00:13:48.749 01:43:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4971588 kB' 'MemAvailable: 9485952 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012764 kB' 'Inactive: 3769440 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142032 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627408 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 160676 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262144 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65388 kB' 'KernelStack: 4260 kB' 'PageTables: 3452 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.749 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.749 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 
01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 
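The figures in this pass also reconcile arithmetically: START TEST no_shrink_alloc called get_test_nr_hugepages with 2097152 for node 0, the meminfo dumps report Hugepagesize: 2048 kB, and the resulting target echoed below is nr_hugepages=1024 on node 0 (1024 pages x 2048 kB = 2097152 kB, the same value the dumps show for Hugetlb). A small helper expressing that division; pages_for_size is a hypothetical name, and reading the requested size as kB is an assumption that happens to match those figures:

pages_for_size() {                               # hypothetical helper, illustrating the arithmetic only
    local size_kb=$1
    local hugepage_kb
    hugepage_kb=$(get_meminfo_value Hugepagesize)   # 2048 on this runner
    echo $(( size_kb / hugepage_kb ))
}
# pages_for_size 2097152  -> 1024, the count verify_nr_hugepages goes on to check for node 0.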
00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.750 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:48.750 01:43:48 -- setup/common.sh@33 -- # echo 0 00:13:48.750 01:43:48 -- setup/common.sh@33 -- # return 0 00:13:48.750 01:43:48 -- setup/hugepages.sh@100 -- # resv=0 00:13:48.750 nr_hugepages=1024 00:13:48.750 01:43:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:48.750 resv_hugepages=0 00:13:48.750 01:43:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:48.750 surplus_hugepages=0 00:13:48.750 01:43:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:48.750 anon_hugepages=0 00:13:48.750 01:43:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:48.750 01:43:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:48.750 01:43:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:48.750 01:43:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:48.750 01:43:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:48.750 01:43:48 -- setup/common.sh@18 -- # local node= 00:13:48.750 01:43:48 -- setup/common.sh@19 -- # local var val 00:13:48.750 01:43:48 -- setup/common.sh@20 -- # local mem_f mem 00:13:48.750 01:43:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:48.750 01:43:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:48.750 01:43:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:48.750 01:43:48 -- setup/common.sh@28 -- # mapfile -t mem 00:13:48.750 01:43:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.750 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4972112 kB' 'MemAvailable: 9486476 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012764 kB' 'Inactive: 3769784 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142376 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627408 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 160776 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262144 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65388 kB' 'KernelStack: 4324 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # 
continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.751 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.751 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 
-- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 
-- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:48.752 01:43:48 -- setup/common.sh@33 -- # echo 1024 00:13:48.752 01:43:48 -- setup/common.sh@33 -- # return 0 00:13:48.752 01:43:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:48.752 01:43:48 -- setup/hugepages.sh@112 -- # get_nodes 00:13:48.752 01:43:48 -- setup/hugepages.sh@27 -- # local node 00:13:48.752 01:43:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:48.752 01:43:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:48.752 01:43:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:48.752 01:43:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:48.752 01:43:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:48.752 01:43:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:48.752 01:43:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:48.752 01:43:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:48.752 01:43:48 -- setup/common.sh@18 -- # local node=0 00:13:48.752 01:43:48 -- setup/common.sh@19 -- # local var val 00:13:48.752 01:43:48 -- setup/common.sh@20 -- # local mem_f mem 00:13:48.752 01:43:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:48.752 01:43:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:48.752 01:43:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:48.752 01:43:48 -- setup/common.sh@28 -- # mapfile -t mem 00:13:48.752 01:43:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4972112 kB' 'MemUsed: 7270852 kB' 'SwapCached: 0 kB' 'Active: 1012764 kB' 'Inactive: 3769836 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142428 kB' 'Active(file): 1011716 kB' 'Inactive(file): 3627408 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'FilePages: 4650784 kB' 'Mapped: 67176 kB' 'AnonPages: 161028 kB' 'Shmem: 2596 kB' 'KernelStack: 4360 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196756 kB' 'Slab: 262144 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 
-- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.752 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.752 01:43:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- 
# continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # continue 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # IFS=': ' 00:13:48.753 01:43:48 -- setup/common.sh@31 -- # read -r var val _ 00:13:48.753 01:43:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:48.753 01:43:48 -- setup/common.sh@33 -- # echo 0 00:13:48.753 01:43:48 -- setup/common.sh@33 -- # return 0 00:13:48.753 01:43:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:48.753 01:43:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:48.753 01:43:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:48.753 01:43:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:48.754 01:43:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:48.754 node0=1024 expecting 1024 00:13:48.754 01:43:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:48.754 01:43:48 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:13:48.754 01:43:48 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:13:48.754 01:43:48 -- setup/hugepages.sh@202 -- # setup output 00:13:48.754 01:43:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:48.754 01:43:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:49.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:13:49.012 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:49.287 INFO: Requested 512 
hugepages but 1024 already allocated on node0 00:13:49.287 01:43:49 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:13:49.287 01:43:49 -- setup/hugepages.sh@89 -- # local node 00:13:49.287 01:43:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:49.287 01:43:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:49.287 01:43:49 -- setup/hugepages.sh@92 -- # local surp 00:13:49.287 01:43:49 -- setup/hugepages.sh@93 -- # local resv 00:13:49.287 01:43:49 -- setup/hugepages.sh@94 -- # local anon 00:13:49.287 01:43:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:49.287 01:43:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:49.287 01:43:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:49.287 01:43:49 -- setup/common.sh@18 -- # local node= 00:13:49.287 01:43:49 -- setup/common.sh@19 -- # local var val 00:13:49.287 01:43:49 -- setup/common.sh@20 -- # local mem_f mem 00:13:49.287 01:43:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:49.287 01:43:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:49.287 01:43:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:49.287 01:43:49 -- setup/common.sh@28 -- # mapfile -t mem 00:13:49.287 01:43:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4969644 kB' 'MemAvailable: 9484004 kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012780 kB' 'Inactive: 3770424 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143028 kB' 'Active(file): 1011724 kB' 'Inactive(file): 3627396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 161324 kB' 'Mapped: 67396 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262072 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65316 kB' 'KernelStack: 4284 kB' 'PageTables: 3872 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Buffers == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var 
val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.287 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.287 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 
00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:49.288 01:43:49 -- setup/common.sh@33 -- # echo 0 00:13:49.288 01:43:49 -- setup/common.sh@33 -- # return 0 00:13:49.288 01:43:49 -- setup/hugepages.sh@97 -- # anon=0 00:13:49.288 01:43:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:49.288 01:43:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:49.288 01:43:49 -- setup/common.sh@18 -- # local node= 00:13:49.288 01:43:49 -- setup/common.sh@19 -- # local var val 00:13:49.288 01:43:49 -- setup/common.sh@20 -- # local mem_f mem 00:13:49.288 01:43:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:49.288 01:43:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:49.288 01:43:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:49.288 01:43:49 -- setup/common.sh@28 -- # mapfile -t mem 00:13:49.288 01:43:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4969876 kB' 'MemAvailable: 9484236 
kB' 'Buffers: 35992 kB' 'Cached: 4614788 kB' 'SwapCached: 0 kB' 'Active: 1012780 kB' 'Inactive: 3770336 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142940 kB' 'Active(file): 1011724 kB' 'Inactive(file): 3627396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 161188 kB' 'Mapped: 67396 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 261984 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65228 kB' 'KernelStack: 4344 kB' 'PageTables: 3800 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # 
continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.288 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.288 01:43:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.289 01:43:49 -- setup/common.sh@33 -- # echo 0 00:13:49.289 01:43:49 -- setup/common.sh@33 -- # return 0 00:13:49.289 01:43:49 -- setup/hugepages.sh@99 -- # surp=0 00:13:49.289 01:43:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:49.289 01:43:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:49.289 01:43:49 -- setup/common.sh@18 
-- # local node= 00:13:49.289 01:43:49 -- setup/common.sh@19 -- # local var val 00:13:49.289 01:43:49 -- setup/common.sh@20 -- # local mem_f mem 00:13:49.289 01:43:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:49.289 01:43:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:49.289 01:43:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:49.289 01:43:49 -- setup/common.sh@28 -- # mapfile -t mem 00:13:49.289 01:43:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4969876 kB' 'MemAvailable: 9484240 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012780 kB' 'Inactive: 3770024 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142624 kB' 'Active(file): 1011724 kB' 'Inactive(file): 3627400 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 160664 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262032 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65276 kB' 'KernelStack: 4304 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 
00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.289 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.289 01:43:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.289 01:43:49 -- 
setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r 
var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:49.290 01:43:49 -- setup/common.sh@33 -- # echo 0 00:13:49.290 01:43:49 -- setup/common.sh@33 -- # return 0 00:13:49.290 01:43:49 -- setup/hugepages.sh@100 -- # resv=0 00:13:49.290 01:43:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:49.290 nr_hugepages=1024 00:13:49.290 resv_hugepages=0 00:13:49.290 01:43:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:49.290 surplus_hugepages=0 00:13:49.290 01:43:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:49.290 anon_hugepages=0 00:13:49.290 01:43:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:49.290 01:43:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:49.290 01:43:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:49.290 01:43:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:49.290 01:43:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:49.290 01:43:49 -- setup/common.sh@18 -- # local node= 00:13:49.290 01:43:49 -- setup/common.sh@19 -- # local var val 00:13:49.290 01:43:49 -- setup/common.sh@20 -- # local mem_f mem 00:13:49.290 01:43:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:49.290 01:43:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:49.290 01:43:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:49.290 01:43:49 -- setup/common.sh@28 -- # mapfile -t mem 00:13:49.290 01:43:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4969876 kB' 'MemAvailable: 9484240 kB' 'Buffers: 35992 kB' 'Cached: 4614792 kB' 'SwapCached: 0 kB' 'Active: 1012780 kB' 'Inactive: 3770284 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142884 kB' 'Active(file): 1011724 kB' 'Inactive(file): 3627400 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 160924 kB' 'Mapped: 67176 kB' 'Shmem: 2596 kB' 'KReclaimable: 196756 kB' 'Slab: 262032 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65276 kB' 'KernelStack: 4372 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 510292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # 
IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.290 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.290 01:43:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 
-- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:49.291 01:43:49 -- setup/common.sh@33 -- # echo 1024 00:13:49.291 01:43:49 -- setup/common.sh@33 -- # return 0 00:13:49.291 01:43:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:49.291 01:43:49 -- setup/hugepages.sh@112 -- # get_nodes 00:13:49.291 01:43:49 -- setup/hugepages.sh@27 -- # local node 00:13:49.291 01:43:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:49.291 01:43:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:49.291 01:43:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:49.291 01:43:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:49.291 01:43:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:49.291 01:43:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:49.291 01:43:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:49.291 01:43:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:49.291 01:43:49 -- setup/common.sh@18 -- # local node=0 00:13:49.291 01:43:49 -- setup/common.sh@19 -- # local var val 00:13:49.291 01:43:49 -- setup/common.sh@20 -- # local mem_f mem 00:13:49.291 01:43:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:49.291 01:43:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:49.291 01:43:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:49.291 01:43:49 -- setup/common.sh@28 -- # mapfile -t mem 00:13:49.291 01:43:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4970140 kB' 'MemUsed: 7272824 kB' 'SwapCached: 0 kB' 'Active: 1012780 kB' 'Inactive: 3769828 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142428 kB' 'Active(file): 1011724 kB' 'Inactive(file): 3627400 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 4650784 kB' 'Mapped: 67176 kB' 'AnonPages: 160716 kB' 'Shmem: 2596 kB' 'KernelStack: 4308 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196756 kB' 'Slab: 262032 kB' 'SReclaimable: 196756 kB' 'SUnreclaim: 65276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 
01:43:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- 
# IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.291 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.291 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 
01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # continue 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # IFS=': ' 00:13:49.292 01:43:49 -- setup/common.sh@31 -- # read -r var val _ 00:13:49.292 01:43:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:49.292 01:43:49 -- setup/common.sh@33 -- # echo 0 00:13:49.292 01:43:49 -- setup/common.sh@33 -- # return 0 00:13:49.292 01:43:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:49.292 01:43:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:49.292 01:43:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:49.292 01:43:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:49.292 01:43:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:49.292 node0=1024 expecting 1024 00:13:49.292 01:43:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:49.292 00:13:49.292 real 0m1.596s 00:13:49.292 user 0m0.656s 00:13:49.292 sys 0m1.034s 00:13:49.292 01:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.292 01:43:49 -- common/autotest_common.sh@10 -- # set +x 00:13:49.292 ************************************ 00:13:49.292 END TEST no_shrink_alloc 00:13:49.292 ************************************ 00:13:49.292 01:43:49 -- setup/hugepages.sh@217 -- # clear_hp 00:13:49.292 01:43:49 -- setup/hugepages.sh@37 -- # local node hp 00:13:49.292 01:43:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:49.292 01:43:49 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:49.292 01:43:49 -- setup/hugepages.sh@41 -- # echo 0 00:13:49.292 01:43:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:49.292 01:43:49 -- setup/hugepages.sh@41 -- # echo 0 00:13:49.292 01:43:49 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:49.292 01:43:49 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:49.292 00:13:49.292 real 0m7.302s 00:13:49.292 user 0m2.663s 00:13:49.292 sys 0m4.920s 00:13:49.292 01:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.292 ************************************ 00:13:49.292 01:43:49 -- common/autotest_common.sh@10 -- # set +x 00:13:49.292 END TEST hugepages 00:13:49.292 ************************************ 00:13:49.292 01:43:49 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:13:49.292 01:43:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:49.292 01:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.292 01:43:49 -- common/autotest_common.sh@10 -- # set +x 00:13:49.552 ************************************ 00:13:49.552 START TEST driver 00:13:49.552 ************************************ 00:13:49.552 01:43:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:13:49.552 * Looking for test storage... 00:13:49.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:49.552 01:43:49 -- setup/driver.sh@68 -- # setup reset 00:13:49.552 01:43:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:49.552 01:43:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:50.118 01:43:50 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:13:50.118 01:43:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:50.118 01:43:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.118 01:43:50 -- common/autotest_common.sh@10 -- # set +x 00:13:50.118 ************************************ 00:13:50.118 START TEST guess_driver 00:13:50.118 ************************************ 00:13:50.118 01:43:50 -- common/autotest_common.sh@1111 -- # guess_driver 00:13:50.118 01:43:50 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:13:50.118 01:43:50 -- setup/driver.sh@47 -- # local fail=0 00:13:50.118 01:43:50 -- setup/driver.sh@49 -- # pick_driver 00:13:50.118 01:43:50 -- setup/driver.sh@36 -- # vfio 00:13:50.118 01:43:50 -- setup/driver.sh@21 -- # local iommu_grups 00:13:50.118 01:43:50 -- setup/driver.sh@22 -- # local unsafe_vfio 00:13:50.118 01:43:50 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:13:50.119 01:43:50 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:13:50.119 01:43:50 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:13:50.119 01:43:50 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:13:50.119 01:43:50 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:13:50.119 01:43:50 -- setup/driver.sh@32 -- # return 1 00:13:50.119 01:43:50 -- setup/driver.sh@38 -- # uio 00:13:50.119 01:43:50 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@12 -- # [[ insmod 
/lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:13:50.119 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:13:50.119 01:43:50 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:13:50.119 Looking for driver=uio_pci_generic 00:13:50.119 01:43:50 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:13:50.119 01:43:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:50.119 01:43:50 -- setup/driver.sh@45 -- # setup output config 00:13:50.119 01:43:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:50.119 01:43:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:50.683 01:43:50 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:13:50.683 01:43:50 -- setup/driver.sh@58 -- # continue 00:13:50.683 01:43:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:50.683 01:43:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:50.683 01:43:50 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:13:50.683 01:43:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:51.618 01:43:51 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:13:51.618 01:43:51 -- setup/driver.sh@65 -- # setup reset 00:13:51.618 01:43:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:51.618 01:43:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:52.184 ************************************ 00:13:52.184 END TEST guess_driver 00:13:52.184 ************************************ 00:13:52.184 00:13:52.184 real 0m2.072s 00:13:52.184 user 0m0.459s 00:13:52.184 sys 0m1.631s 00:13:52.184 01:43:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.184 01:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:52.184 00:13:52.184 real 0m2.864s 00:13:52.184 user 0m0.803s 00:13:52.184 sys 0m2.090s 00:13:52.184 01:43:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.442 01:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:52.442 ************************************ 00:13:52.442 END TEST driver 00:13:52.442 ************************************ 00:13:52.442 01:43:52 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:13:52.442 01:43:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:52.442 01:43:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.442 01:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:52.442 ************************************ 00:13:52.442 START TEST devices 00:13:52.442 ************************************ 00:13:52.442 01:43:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:13:52.442 * Looking for test storage... 
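Annotation: the guess_driver trace above lands on uio_pci_generic because the vfio path fails first: /sys/kernel/iommu_groups is empty, /sys/module/vfio/parameters/enable_unsafe_noiommu_mode reads N, so the test falls through and accepts uio_pci_generic once modprobe --show-depends resolves it to a .ko file. A minimal bash sketch of that decision follows; it mirrors the checks visible in the trace, but the function name and structure here are illustrative, not the test script's own.

#!/usr/bin/env bash
# Sketch of the vfio-vs-uio decision seen in the guess_driver trace above.
# Sysfs paths are taken from the trace; pick_driver itself is a made-up helper.
shopt -s nullglob
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci is only usable when IOMMU groups exist or unsafe no-IOMMU mode is on
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Otherwise fall back to uio_pci_generic, provided modprobe resolves it to a module
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

A caller would use it as driver=$(pick_driver), which is effectively what the "Looking for driver=uio_pci_generic" line above reports.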
00:13:52.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:52.442 01:43:52 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:13:52.442 01:43:52 -- setup/devices.sh@192 -- # setup reset 00:13:52.442 01:43:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:52.442 01:43:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:53.102 01:43:53 -- setup/devices.sh@194 -- # get_zoned_devs 00:13:53.102 01:43:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:53.102 01:43:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:53.102 01:43:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:53.102 01:43:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:53.102 01:43:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:53.102 01:43:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:53.102 01:43:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:53.102 01:43:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:53.102 01:43:53 -- setup/devices.sh@196 -- # blocks=() 00:13:53.102 01:43:53 -- setup/devices.sh@196 -- # declare -a blocks 00:13:53.102 01:43:53 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:13:53.102 01:43:53 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:13:53.102 01:43:53 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:13:53.102 01:43:53 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:53.102 01:43:53 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:13:53.102 01:43:53 -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:53.102 01:43:53 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:13:53.102 01:43:53 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:13:53.102 01:43:53 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:13:53.102 01:43:53 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:13:53.102 01:43:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:13:53.102 No valid GPT data, bailing 00:13:53.102 01:43:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:53.102 01:43:53 -- scripts/common.sh@391 -- # pt= 00:13:53.102 01:43:53 -- scripts/common.sh@392 -- # return 1 00:13:53.102 01:43:53 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:13:53.102 01:43:53 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:53.102 01:43:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:53.102 01:43:53 -- setup/common.sh@80 -- # echo 5368709120 00:13:53.102 01:43:53 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:13:53.102 01:43:53 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:53.102 01:43:53 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:13:53.102 01:43:53 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:13:53.102 01:43:53 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:13:53.102 01:43:53 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:13:53.102 01:43:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:53.102 01:43:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.102 01:43:53 -- common/autotest_common.sh@10 -- # set +x 00:13:53.102 ************************************ 00:13:53.102 START TEST nvme_mount 00:13:53.102 ************************************ 00:13:53.102 01:43:53 -- common/autotest_common.sh@1111 -- # nvme_mount 00:13:53.102 01:43:53 
-- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:13:53.102 01:43:53 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:13:53.102 01:43:53 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:53.102 01:43:53 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:53.102 01:43:53 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:13:53.102 01:43:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:53.102 01:43:53 -- setup/common.sh@40 -- # local part_no=1 00:13:53.102 01:43:53 -- setup/common.sh@41 -- # local size=1073741824 00:13:53.102 01:43:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:53.102 01:43:53 -- setup/common.sh@44 -- # parts=() 00:13:53.102 01:43:53 -- setup/common.sh@44 -- # local parts 00:13:53.102 01:43:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:13:53.102 01:43:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:53.102 01:43:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:53.102 01:43:53 -- setup/common.sh@46 -- # (( part++ )) 00:13:53.102 01:43:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:53.102 01:43:53 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:13:53.102 01:43:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:53.102 01:43:53 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:13:54.474 Creating new GPT entries in memory. 00:13:54.474 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:54.474 other utilities. 00:13:54.474 01:43:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:13:54.474 01:43:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:54.474 01:43:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:54.474 01:43:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:54.474 01:43:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:13:55.407 Creating new GPT entries in memory. 00:13:55.407 The operation has completed successfully. 
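Annotation: at this point the nvme_mount test has zapped nvme0n1 and created its single test partition; the entries that follow format it, mount it, drop a dummy test file, and later wipe everything again. Condensed into plain shell, the sequence the trace walks through looks roughly like this (devices and paths are the ones in the log; treat it as a paraphrase of the traced commands, not the test script itself):

#!/usr/bin/env bash
set -euo pipefail
# Paraphrase of the nvme_mount sequence traced in this log.
disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                              # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1: sectors 2048-264191
mkfs.ext4 -qF "$part"                                 # quiet, forced ext4 format
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                                # dummy file the verify step looks for

# Cleanup, as traced further down: unmount, then wipe both partition and disk
umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"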
00:13:55.407 01:43:55 -- setup/common.sh@57 -- # (( part++ )) 00:13:55.407 01:43:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:55.407 01:43:55 -- setup/common.sh@62 -- # wait 103970 00:13:55.407 01:43:55 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:55.407 01:43:55 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:13:55.407 01:43:55 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:55.407 01:43:55 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:13:55.407 01:43:55 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:13:55.407 01:43:55 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:55.407 01:43:55 -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:55.407 01:43:55 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:13:55.407 01:43:55 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:13:55.407 01:43:55 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:55.407 01:43:55 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:55.407 01:43:55 -- setup/devices.sh@53 -- # local found=0 00:13:55.407 01:43:55 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:55.407 01:43:55 -- setup/devices.sh@56 -- # : 00:13:55.407 01:43:55 -- setup/devices.sh@59 -- # local pci status 00:13:55.407 01:43:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:13:55.407 01:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:55.407 01:43:55 -- setup/devices.sh@47 -- # setup output config 00:13:55.407 01:43:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:55.407 01:43:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:55.407 01:43:55 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:55.407 01:43:55 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:13:55.407 01:43:55 -- setup/devices.sh@63 -- # found=1 00:13:55.407 01:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:55.407 01:43:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:55.407 01:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:55.664 01:43:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:55.664 01:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:56.598 01:43:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:56.598 01:43:56 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:13:56.598 01:43:56 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:56.598 01:43:56 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:56.598 01:43:56 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:56.598 01:43:56 -- setup/devices.sh@110 -- # cleanup_nvme 00:13:56.598 01:43:56 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:56.598 01:43:56 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:56.598 01:43:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:56.598 01:43:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:56.598 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:56.598 01:43:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:56.598 01:43:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:56.598 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:56.598 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:56.598 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:56.598 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:56.598 01:43:56 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:13:56.598 01:43:56 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:13:56.598 01:43:56 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:56.598 01:43:56 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:13:56.598 01:43:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:13:56.598 01:43:56 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:56.598 01:43:56 -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:56.598 01:43:56 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:13:56.598 01:43:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:13:56.598 01:43:56 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:56.598 01:43:56 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:56.598 01:43:56 -- setup/devices.sh@53 -- # local found=0 00:13:56.598 01:43:56 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:56.598 01:43:56 -- setup/devices.sh@56 -- # : 00:13:56.598 01:43:56 -- setup/devices.sh@59 -- # local pci status 00:13:56.598 01:43:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:56.598 01:43:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:13:56.598 01:43:56 -- setup/devices.sh@47 -- # setup output config 00:13:56.598 01:43:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:56.598 01:43:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:56.857 01:43:56 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:56.857 01:43:56 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:13:56.857 01:43:56 -- setup/devices.sh@63 -- # found=1 00:13:56.857 01:43:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:56.857 01:43:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:56.857 01:43:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:57.121 01:43:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:57.121 01:43:56 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:13:58.064 01:43:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:58.064 01:43:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:13:58.065 01:43:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:58.065 01:43:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:58.065 01:43:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:58.065 01:43:57 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:58.065 01:43:57 -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:13:58.065 01:43:57 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:13:58.065 01:43:57 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:13:58.065 01:43:57 -- setup/devices.sh@50 -- # local mount_point= 00:13:58.065 01:43:57 -- setup/devices.sh@51 -- # local test_file= 00:13:58.065 01:43:57 -- setup/devices.sh@53 -- # local found=0 00:13:58.065 01:43:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:58.065 01:43:57 -- setup/devices.sh@59 -- # local pci status 00:13:58.065 01:43:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:58.065 01:43:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:13:58.065 01:43:57 -- setup/devices.sh@47 -- # setup output config 00:13:58.065 01:43:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:58.065 01:43:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:58.323 01:43:58 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:58.323 01:43:58 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:13:58.323 01:43:58 -- setup/devices.sh@63 -- # found=1 00:13:58.323 01:43:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:58.323 01:43:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:58.323 01:43:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:58.323 01:43:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:13:58.323 01:43:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:59.258 01:43:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:59.258 01:43:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:59.258 01:43:59 -- setup/devices.sh@68 -- # return 0 00:13:59.258 01:43:59 -- setup/devices.sh@128 -- # cleanup_nvme 00:13:59.258 01:43:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:59.258 01:43:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:59.258 01:43:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:59.258 01:43:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:59.258 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:59.258 00:13:59.258 real 0m6.156s 00:13:59.258 user 0m0.691s 00:13:59.258 sys 0m3.531s 00:13:59.258 01:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.258 ************************************ 00:13:59.258 01:43:59 -- common/autotest_common.sh@10 -- # set +x 00:13:59.258 END TEST nvme_mount 00:13:59.258 ************************************ 00:13:59.258 01:43:59 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:13:59.258 01:43:59 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:13:59.258 01:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.258 01:43:59 -- common/autotest_common.sh@10 -- # set +x 00:13:59.517 ************************************ 00:13:59.517 START TEST dm_mount 00:13:59.517 ************************************ 00:13:59.517 01:43:59 -- common/autotest_common.sh@1111 -- # dm_mount 00:13:59.517 01:43:59 -- setup/devices.sh@144 -- # pv=nvme0n1 00:13:59.517 01:43:59 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:13:59.517 01:43:59 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:13:59.517 01:43:59 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:13:59.517 01:43:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:59.517 01:43:59 -- setup/common.sh@40 -- # local part_no=2 00:13:59.517 01:43:59 -- setup/common.sh@41 -- # local size=1073741824 00:13:59.517 01:43:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:59.517 01:43:59 -- setup/common.sh@44 -- # parts=() 00:13:59.517 01:43:59 -- setup/common.sh@44 -- # local parts 00:13:59.517 01:43:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:13:59.517 01:43:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:59.517 01:43:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:59.517 01:43:59 -- setup/common.sh@46 -- # (( part++ )) 00:13:59.517 01:43:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:59.517 01:43:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:59.517 01:43:59 -- setup/common.sh@46 -- # (( part++ )) 00:13:59.517 01:43:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:59.517 01:43:59 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:13:59.517 01:43:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:59.518 01:43:59 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:14:00.460 Creating new GPT entries in memory. 00:14:00.460 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:00.460 other utilities. 00:14:00.460 01:44:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:14:00.461 01:44:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:00.461 01:44:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:00.461 01:44:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:00.461 01:44:00 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:01.423 Creating new GPT entries in memory. 00:14:01.423 The operation has completed successfully. 00:14:01.423 01:44:01 -- setup/common.sh@57 -- # (( part++ )) 00:14:01.423 01:44:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:01.423 01:44:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:01.423 01:44:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:01.423 01:44:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:14:02.812 The operation has completed successfully. 
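Annotation: with both test partitions in place, the entries that follow build a device-mapper target named nvme_dm_test on top of them, format it, and mount it; the holders checks under /sys/class/block confirm both partitions back the new dm node. A rough sketch of that step follows, assuming a linear table that concatenates the two 262144-sector partitions; the actual table the script feeds to dmsetup is not printed in this log, so the table below is an assumption.

#!/usr/bin/env bash
set -euo pipefail
# Assumed dm setup for the nvme_dm_test device used by the dm_mount test.
# Each partition spans 262144 sectors, per the sgdisk calls above; the linear
# table is an assumption, since the log never shows the table itself.
dmsetup create nvme_dm_test <<'EOF'
0 262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF

dm=$(readlink -f /dev/mapper/nvme_dm_test)            # resolves to e.g. /dev/dm-0
echo "nvme_dm_test is backed by $dm"
# Both partitions should now list the dm node in their holders directory
ls /sys/class/block/nvme0n1p1/holders /sys/class/block/nvme0n1p2/holders

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount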
00:14:02.812 01:44:02 -- setup/common.sh@57 -- # (( part++ )) 00:14:02.812 01:44:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:02.812 01:44:02 -- setup/common.sh@62 -- # wait 104456 00:14:02.812 01:44:02 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:14:02.812 01:44:02 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:02.812 01:44:02 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:02.812 01:44:02 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:14:02.812 01:44:02 -- setup/devices.sh@160 -- # for t in {1..5} 00:14:02.812 01:44:02 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:02.812 01:44:02 -- setup/devices.sh@161 -- # break 00:14:02.812 01:44:02 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:02.812 01:44:02 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:14:02.812 01:44:02 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:14:02.812 01:44:02 -- setup/devices.sh@166 -- # dm=dm-0 00:14:02.812 01:44:02 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:14:02.812 01:44:02 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:14:02.812 01:44:02 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:02.812 01:44:02 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:14:02.812 01:44:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:02.812 01:44:02 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:02.812 01:44:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:14:02.812 01:44:02 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:02.812 01:44:02 -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:02.812 01:44:02 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:14:02.812 01:44:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:14:02.812 01:44:02 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:02.812 01:44:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:02.812 01:44:02 -- setup/devices.sh@53 -- # local found=0 00:14:02.812 01:44:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:02.812 01:44:02 -- setup/devices.sh@56 -- # : 00:14:02.812 01:44:02 -- setup/devices.sh@59 -- # local pci status 00:14:02.812 01:44:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:02.812 01:44:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:14:02.812 01:44:02 -- setup/devices.sh@47 -- # setup output config 00:14:02.812 01:44:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:02.812 01:44:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:02.812 01:44:02 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:14:02.812 01:44:02 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:14:02.812 01:44:02 -- setup/devices.sh@63 -- # found=1 00:14:02.812 01:44:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:02.812 01:44:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:14:02.812 01:44:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:03.071 01:44:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:14:03.071 01:44:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:04.009 01:44:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:04.009 01:44:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:14:04.009 01:44:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:04.009 01:44:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:04.009 01:44:03 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:04.009 01:44:03 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:04.009 01:44:03 -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:14:04.009 01:44:03 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:14:04.009 01:44:03 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:14:04.009 01:44:03 -- setup/devices.sh@50 -- # local mount_point= 00:14:04.009 01:44:03 -- setup/devices.sh@51 -- # local test_file= 00:14:04.009 01:44:03 -- setup/devices.sh@53 -- # local found=0 00:14:04.009 01:44:03 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:04.009 01:44:03 -- setup/devices.sh@59 -- # local pci status 00:14:04.009 01:44:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:04.009 01:44:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:14:04.009 01:44:03 -- setup/devices.sh@47 -- # setup output config 00:14:04.009 01:44:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:04.009 01:44:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:04.268 01:44:04 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:14:04.268 01:44:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:14:04.268 01:44:04 -- setup/devices.sh@63 -- # found=1 00:14:04.268 01:44:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:04.268 01:44:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:14:04.268 01:44:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:04.268 01:44:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:14:04.268 01:44:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:05.204 01:44:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:05.204 01:44:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:05.204 01:44:05 -- setup/devices.sh@68 -- # return 0 00:14:05.204 01:44:05 -- setup/devices.sh@187 -- # cleanup_dm 00:14:05.204 01:44:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:05.204 01:44:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:05.204 01:44:05 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:14:05.463 01:44:05 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:14:05.463 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:05.463 01:44:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:14:05.463 00:14:05.463 real 0m5.922s 00:14:05.463 user 0m0.419s 00:14:05.463 sys 0m2.416s 00:14:05.463 01:44:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.463 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:14:05.463 ************************************ 00:14:05.463 END TEST dm_mount 00:14:05.463 ************************************ 00:14:05.463 01:44:05 -- setup/devices.sh@1 -- # cleanup 00:14:05.463 01:44:05 -- setup/devices.sh@11 -- # cleanup_nvme 00:14:05.463 01:44:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:05.463 01:44:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:05.463 01:44:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:05.463 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:05.463 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:05.463 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:05.463 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:05.463 01:44:05 -- setup/devices.sh@12 -- # cleanup_dm 00:14:05.463 01:44:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:05.463 01:44:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:05.463 01:44:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:14:05.463 01:44:05 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:14:05.463 00:14:05.463 real 0m13.070s 00:14:05.463 user 0m1.596s 00:14:05.463 sys 0m6.456s 00:14:05.463 01:44:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.463 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:14:05.463 ************************************ 00:14:05.463 END TEST devices 00:14:05.463 ************************************ 00:14:05.463 00:14:05.463 real 0m29.235s 00:14:05.463 user 0m7.052s 00:14:05.463 sys 0m17.613s 00:14:05.463 01:44:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.463 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:14:05.463 ************************************ 00:14:05.463 END TEST setup.sh 00:14:05.463 ************************************ 00:14:05.463 01:44:05 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:06.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:14:06.031 Hugepages 00:14:06.031 node hugesize free / total 00:14:06.031 node0 1048576kB 0 / 0 00:14:06.031 node0 2048kB 2048 / 2048 00:14:06.031 00:14:06.031 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:06.290 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:06.290 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:14:06.290 01:44:06 -- spdk/autotest.sh@130 -- # uname -s 00:14:06.290 
01:44:06 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:14:06.290 01:44:06 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:14:06.290 01:44:06 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:06.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:14:06.858 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:07.792 01:44:07 -- common/autotest_common.sh@1518 -- # sleep 1 00:14:08.725 01:44:08 -- common/autotest_common.sh@1519 -- # bdfs=() 00:14:08.725 01:44:08 -- common/autotest_common.sh@1519 -- # local bdfs 00:14:08.725 01:44:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:14:08.725 01:44:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:14:08.725 01:44:08 -- common/autotest_common.sh@1499 -- # bdfs=() 00:14:08.725 01:44:08 -- common/autotest_common.sh@1499 -- # local bdfs 00:14:08.725 01:44:08 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:08.725 01:44:08 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:08.725 01:44:08 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:14:08.982 01:44:08 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:14:08.982 01:44:08 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:14:08.983 01:44:08 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:09.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:14:09.241 Waiting for block devices as requested 00:14:09.501 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.501 01:44:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:09.501 01:44:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:14:09.501 01:44:09 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:14:09.501 01:44:09 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:14:09.501 01:44:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:09.501 01:44:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:09.501 01:44:09 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:09.501 01:44:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:09.501 01:44:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:09.501 01:44:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:14:09.501 01:44:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:09.501 01:44:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:09.501 01:44:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:09.501 01:44:09 -- common/autotest_common.sh@1541 
-- # [[ 0 -eq 0 ]] 00:14:09.501 01:44:09 -- common/autotest_common.sh@1543 -- # continue 00:14:09.501 01:44:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:14:09.501 01:44:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:09.501 01:44:09 -- common/autotest_common.sh@10 -- # set +x 00:14:09.501 01:44:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:14:09.501 01:44:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:09.501 01:44:09 -- common/autotest_common.sh@10 -- # set +x 00:14:09.501 01:44:09 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:10.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:14:10.069 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.003 01:44:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:11.003 01:44:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:11.003 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:14:11.003 01:44:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:11.003 01:44:11 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:14:11.003 01:44:11 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:14:11.003 01:44:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:14:11.003 01:44:11 -- common/autotest_common.sh@1563 -- # local bdfs 00:14:11.003 01:44:11 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:14:11.003 01:44:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:14:11.003 01:44:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:14:11.003 01:44:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:11.003 01:44:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:14:11.003 01:44:11 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:11.262 01:44:11 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:14:11.262 01:44:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:14:11.262 01:44:11 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:14:11.262 01:44:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:11.262 01:44:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:11.262 01:44:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:11.262 01:44:11 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:14:11.262 01:44:11 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:14:11.262 01:44:11 -- common/autotest_common.sh@1579 -- # return 0 00:14:11.262 01:44:11 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:14:11.262 01:44:11 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:11.262 01:44:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:11.262 01:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.262 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:14:11.262 ************************************ 00:14:11.262 START TEST unittest 00:14:11.262 ************************************ 00:14:11.262 01:44:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:11.262 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:11.262 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:14:11.262 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:14:11.262 +++ 
dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:11.262 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:14:11.262 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:11.262 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:11.262 ++ rpc_py=rpc_cmd 00:14:11.262 ++ set -e 00:14:11.262 ++ shopt -s nullglob 00:14:11.262 ++ shopt -s extglob 00:14:11.262 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:11.262 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:11.262 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:11.262 +++ CONFIG_WPDK_DIR= 00:14:11.262 +++ CONFIG_ASAN=y 00:14:11.262 +++ CONFIG_VBDEV_COMPRESS=n 00:14:11.262 +++ CONFIG_HAVE_EXECINFO_H=y 00:14:11.262 +++ CONFIG_USDT=n 00:14:11.262 +++ CONFIG_CUSTOMOCF=n 00:14:11.262 +++ CONFIG_PREFIX=/usr/local 00:14:11.262 +++ CONFIG_RBD=n 00:14:11.262 +++ CONFIG_LIBDIR= 00:14:11.262 +++ CONFIG_IDXD=y 00:14:11.262 +++ CONFIG_NVME_CUSE=y 00:14:11.262 +++ CONFIG_SMA=n 00:14:11.262 +++ CONFIG_VTUNE=n 00:14:11.262 +++ CONFIG_TSAN=n 00:14:11.262 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:11.262 +++ CONFIG_VFIO_USER_DIR= 00:14:11.262 +++ CONFIG_PGO_CAPTURE=n 00:14:11.262 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:11.263 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:11.263 +++ CONFIG_LTO=n 00:14:11.263 +++ CONFIG_ISCSI_INITIATOR=y 00:14:11.263 +++ CONFIG_CET=n 00:14:11.263 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:11.263 +++ CONFIG_OCF_PATH= 00:14:11.263 +++ CONFIG_RDMA_SET_TOS=y 00:14:11.263 +++ CONFIG_HAVE_ARC4RANDOM=n 00:14:11.263 +++ CONFIG_HAVE_LIBARCHIVE=n 00:14:11.263 +++ CONFIG_UBLK=n 00:14:11.263 +++ CONFIG_ISAL_CRYPTO=y 00:14:11.263 +++ CONFIG_OPENSSL_PATH= 00:14:11.263 +++ CONFIG_OCF=n 00:14:11.263 +++ CONFIG_FUSE=n 00:14:11.263 +++ CONFIG_VTUNE_DIR= 00:14:11.263 +++ CONFIG_FUZZER_LIB= 00:14:11.263 +++ CONFIG_FUZZER=n 00:14:11.263 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:11.263 +++ CONFIG_CRYPTO=n 00:14:11.263 +++ CONFIG_PGO_USE=n 00:14:11.263 +++ CONFIG_VHOST=y 00:14:11.263 +++ CONFIG_DAOS=n 00:14:11.263 +++ CONFIG_DPDK_INC_DIR= 00:14:11.263 +++ CONFIG_DAOS_DIR= 00:14:11.263 +++ CONFIG_UNIT_TESTS=y 00:14:11.263 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:11.263 +++ CONFIG_VIRTIO=y 00:14:11.263 +++ CONFIG_COVERAGE=y 00:14:11.263 +++ CONFIG_RDMA=y 00:14:11.263 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:11.263 +++ CONFIG_URING_PATH= 00:14:11.263 +++ CONFIG_XNVME=n 00:14:11.263 +++ CONFIG_VFIO_USER=n 00:14:11.263 +++ CONFIG_ARCH=native 00:14:11.263 +++ CONFIG_HAVE_EVP_MAC=y 00:14:11.263 +++ CONFIG_URING_ZNS=n 00:14:11.263 +++ CONFIG_WERROR=y 00:14:11.263 +++ CONFIG_HAVE_LIBBSD=n 00:14:11.263 +++ CONFIG_UBSAN=y 00:14:11.263 +++ CONFIG_IPSEC_MB_DIR= 00:14:11.263 +++ CONFIG_GOLANG=n 00:14:11.263 +++ CONFIG_ISAL=y 00:14:11.263 +++ CONFIG_IDXD_KERNEL=n 00:14:11.263 +++ CONFIG_DPDK_LIB_DIR= 00:14:11.263 +++ CONFIG_RDMA_PROV=verbs 00:14:11.263 +++ CONFIG_APPS=y 00:14:11.263 +++ CONFIG_SHARED=n 00:14:11.263 +++ CONFIG_HAVE_KEYUTILS=y 00:14:11.263 +++ CONFIG_FC_PATH= 00:14:11.263 +++ CONFIG_DPDK_PKG_CONFIG=n 00:14:11.263 +++ CONFIG_FC=n 00:14:11.263 +++ CONFIG_AVAHI=n 00:14:11.263 +++ CONFIG_FIO_PLUGIN=y 00:14:11.263 +++ CONFIG_RAID5F=y 00:14:11.263 +++ CONFIG_EXAMPLES=y 00:14:11.263 +++ CONFIG_TESTS=y 00:14:11.263 +++ CONFIG_CRYPTO_MLX5=n 00:14:11.263 +++ CONFIG_MAX_LCORES= 00:14:11.263 +++ CONFIG_IPSEC_MB=n 00:14:11.263 +++ CONFIG_PGO_DIR= 00:14:11.263 +++ CONFIG_DEBUG=y 00:14:11.263 +++ CONFIG_DPDK_COMPRESSDEV=n 
00:14:11.263 +++ CONFIG_CROSS_PREFIX= 00:14:11.263 +++ CONFIG_URING=n 00:14:11.263 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:11.263 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:11.263 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:11.263 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:11.263 +++ _root=/home/vagrant/spdk_repo/spdk 00:14:11.263 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:11.263 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:11.263 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:11.263 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:11.263 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:11.263 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:11.263 +++ VHOST_APP=("$_app_dir/vhost") 00:14:11.263 +++ DD_APP=("$_app_dir/spdk_dd") 00:14:11.263 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:14:11.263 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:11.263 +++ [[ #ifndef SPDK_CONFIG_H 00:14:11.263 #define SPDK_CONFIG_H 00:14:11.263 #define SPDK_CONFIG_APPS 1 00:14:11.263 #define SPDK_CONFIG_ARCH native 00:14:11.263 #define SPDK_CONFIG_ASAN 1 00:14:11.263 #undef SPDK_CONFIG_AVAHI 00:14:11.263 #undef SPDK_CONFIG_CET 00:14:11.263 #define SPDK_CONFIG_COVERAGE 1 00:14:11.263 #define SPDK_CONFIG_CROSS_PREFIX 00:14:11.263 #undef SPDK_CONFIG_CRYPTO 00:14:11.263 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:11.263 #undef SPDK_CONFIG_CUSTOMOCF 00:14:11.263 #undef SPDK_CONFIG_DAOS 00:14:11.263 #define SPDK_CONFIG_DAOS_DIR 00:14:11.263 #define SPDK_CONFIG_DEBUG 1 00:14:11.263 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:11.263 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:11.263 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:11.263 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:11.263 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:11.263 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:11.263 #define SPDK_CONFIG_EXAMPLES 1 00:14:11.263 #undef SPDK_CONFIG_FC 00:14:11.263 #define SPDK_CONFIG_FC_PATH 00:14:11.263 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:11.263 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:11.263 #undef SPDK_CONFIG_FUSE 00:14:11.263 #undef SPDK_CONFIG_FUZZER 00:14:11.263 #define SPDK_CONFIG_FUZZER_LIB 00:14:11.263 #undef SPDK_CONFIG_GOLANG 00:14:11.263 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:14:11.263 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:11.263 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:11.263 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:11.263 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:11.263 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:11.263 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:11.263 #define SPDK_CONFIG_IDXD 1 00:14:11.263 #undef SPDK_CONFIG_IDXD_KERNEL 00:14:11.263 #undef SPDK_CONFIG_IPSEC_MB 00:14:11.263 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:11.263 #define SPDK_CONFIG_ISAL 1 00:14:11.263 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:11.263 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:11.263 #define SPDK_CONFIG_LIBDIR 00:14:11.263 #undef SPDK_CONFIG_LTO 00:14:11.263 #define SPDK_CONFIG_MAX_LCORES 00:14:11.263 #define SPDK_CONFIG_NVME_CUSE 1 00:14:11.263 #undef SPDK_CONFIG_OCF 00:14:11.263 #define SPDK_CONFIG_OCF_PATH 00:14:11.263 #define SPDK_CONFIG_OPENSSL_PATH 00:14:11.263 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:11.263 #define SPDK_CONFIG_PGO_DIR 00:14:11.263 #undef SPDK_CONFIG_PGO_USE 00:14:11.263 #define SPDK_CONFIG_PREFIX /usr/local 00:14:11.263 #define SPDK_CONFIG_RAID5F 1 00:14:11.263 
#undef SPDK_CONFIG_RBD 00:14:11.263 #define SPDK_CONFIG_RDMA 1 00:14:11.263 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:11.263 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:11.263 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:11.263 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:11.263 #undef SPDK_CONFIG_SHARED 00:14:11.263 #undef SPDK_CONFIG_SMA 00:14:11.263 #define SPDK_CONFIG_TESTS 1 00:14:11.263 #undef SPDK_CONFIG_TSAN 00:14:11.263 #undef SPDK_CONFIG_UBLK 00:14:11.263 #define SPDK_CONFIG_UBSAN 1 00:14:11.263 #define SPDK_CONFIG_UNIT_TESTS 1 00:14:11.263 #undef SPDK_CONFIG_URING 00:14:11.263 #define SPDK_CONFIG_URING_PATH 00:14:11.263 #undef SPDK_CONFIG_URING_ZNS 00:14:11.263 #undef SPDK_CONFIG_USDT 00:14:11.263 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:11.263 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:11.263 #undef SPDK_CONFIG_VFIO_USER 00:14:11.263 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:11.263 #define SPDK_CONFIG_VHOST 1 00:14:11.263 #define SPDK_CONFIG_VIRTIO 1 00:14:11.263 #undef SPDK_CONFIG_VTUNE 00:14:11.263 #define SPDK_CONFIG_VTUNE_DIR 00:14:11.263 #define SPDK_CONFIG_WERROR 1 00:14:11.263 #define SPDK_CONFIG_WPDK_DIR 00:14:11.263 #undef SPDK_CONFIG_XNVME 00:14:11.263 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:11.263 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:11.263 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.263 +++ [[ -e /bin/wpdk_common.sh ]] 00:14:11.263 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.263 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.263 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:14:11.263 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:14:11.263 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:14:11.263 ++++ export PATH 00:14:11.263 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:14:11.263 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:11.263 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:11.263 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:11.263 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:11.263 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:11.263 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:11.263 +++ TEST_TAG=N/A 00:14:11.263 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:11.263 +++ 
PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:11.263 ++++ uname -s 00:14:11.263 +++ PM_OS=Linux 00:14:11.263 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:11.264 +++ [[ Linux == FreeBSD ]] 00:14:11.264 +++ [[ Linux == Linux ]] 00:14:11.264 +++ [[ QEMU != QEMU ]] 00:14:11.264 +++ MONITOR_RESOURCES_PIDS=() 00:14:11.264 +++ declare -A MONITOR_RESOURCES_PIDS 00:14:11.264 +++ mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:14:11.264 ++ : 0 00:14:11.264 ++ export RUN_NIGHTLY 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_RUN_VALGRIND 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_TEST_UNITTEST 00:14:11.264 ++ : 00:14:11.264 ++ export SPDK_TEST_AUTOBUILD 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_RELEASE_BUILD 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_ISAL 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_ISCSI 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_ISCSI_INITIATOR 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_TEST_NVME 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVME_PMR 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVME_BP 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVME_CLI 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVME_CUSE 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVME_FDP 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVMF 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_VFIOUSER 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_VFIOUSER_QEMU 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_FUZZER 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_FUZZER_SHORT 00:14:11.264 ++ : rdma 00:14:11.264 ++ export SPDK_TEST_NVMF_TRANSPORT 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_RBD 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_VHOST 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_TEST_BLOCKDEV 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_IOAT 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_BLOBFS 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_VHOST_INIT 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_LVOL 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_VBDEV_COMPRESS 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_RUN_ASAN 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_RUN_UBSAN 00:14:11.264 ++ : 00:14:11.264 ++ export SPDK_RUN_EXTERNAL_DPDK 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_RUN_NON_ROOT 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_CRYPTO 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_FTL 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_OCF 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_VMD 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_OPAL 00:14:11.264 ++ : 00:14:11.264 ++ export SPDK_TEST_NATIVE_DPDK 00:14:11.264 ++ : true 00:14:11.264 ++ export SPDK_AUTOTEST_X 00:14:11.264 ++ : 1 00:14:11.264 ++ export SPDK_TEST_RAID5 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_URING 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_USDT 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_USE_IGB_UIO 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_SCHEDULER 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_SCANBUILD 00:14:11.264 ++ : 00:14:11.264 ++ export SPDK_TEST_NVMF_NICS 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_SMA 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_DAOS 00:14:11.264 ++ : 0 
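The default/export pairs continue below. These SPDK_TEST_* and SPDK_RUN_* variables are what autotest.sh uses to decide which suites to run; the "'[' 1 -eq 1 ']'" check and the run_test call for unittest.sh seen earlier in this trace follow this pattern. A hedged sketch of that gating, with names taken from the trace:

    : "${SPDK_TEST_UNITTEST:=0}"    # defaulted to 0; exported as 1 in this run

    if [ "$SPDK_TEST_UNITTEST" -eq 1 ]; then
        run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
    fi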
00:14:11.264 ++ export SPDK_TEST_XNVME 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_ACCEL_DSA 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_ACCEL_IAA 00:14:11.264 ++ : 00:14:11.264 ++ export SPDK_TEST_FUZZER_TARGET 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_TEST_NVMF_MDNS 00:14:11.264 ++ : 0 00:14:11.264 ++ export SPDK_JSONRPC_GO_CLIENT 00:14:11.264 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:11.264 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:11.264 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:11.264 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:11.264 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:11.264 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:11.264 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:11.264 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:11.264 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:11.264 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:14:11.264 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:11.264 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:11.264 ++ export PYTHONDONTWRITEBYTECODE=1 00:14:11.264 ++ PYTHONDONTWRITEBYTECODE=1 00:14:11.264 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:11.264 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:11.264 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:11.264 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:11.264 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:14:11.264 ++ rm -rf /var/tmp/asan_suppression_file 00:14:11.264 ++ cat 00:14:11.264 ++ echo leak:libfuse3.so 00:14:11.264 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:11.264 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:11.264 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:11.264 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:11.264 ++ '[' -z /var/spdk/dependencies ']' 00:14:11.264 ++ export DEPENDENCY_DIR 00:14:11.264 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:11.264 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:11.264 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:11.264 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:11.264 ++ export QEMU_BIN= 00:14:11.264 ++ QEMU_BIN= 00:14:11.264 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:14:11.264 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:14:11.264 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:11.264 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:11.264 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:11.264 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:11.264 ++ '[' 0 -eq 0 ']' 00:14:11.264 ++ export valgrind= 00:14:11.264 ++ valgrind= 00:14:11.264 +++ uname -s 00:14:11.264 ++ '[' Linux = Linux ']' 00:14:11.264 ++ HUGEMEM=4096 00:14:11.264 ++ export CLEAR_HUGE=yes 00:14:11.264 ++ CLEAR_HUGE=yes 00:14:11.264 ++ [[ 0 -eq 1 ]] 00:14:11.264 ++ [[ 0 -eq 1 ]] 00:14:11.264 ++ MAKE=make 00:14:11.264 +++ nproc 00:14:11.264 ++ MAKEFLAGS=-j10 00:14:11.264 ++ export HUGEMEM=4096 00:14:11.264 ++ HUGEMEM=4096 00:14:11.264 ++ NO_HUGE=() 00:14:11.264 ++ TEST_MODE= 00:14:11.264 ++ [[ -z '' ]] 00:14:11.264 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:14:11.264 ++ exec 00:14:11.264 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:14:11.264 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:14:11.264 ++ set_test_storage 2147483648 00:14:11.264 ++ [[ -v testdir ]] 00:14:11.264 ++ local requested_size=2147483648 00:14:11.264 ++ local mount target_dir 00:14:11.264 ++ local -A mounts fss sizes avails uses 00:14:11.264 ++ local source fs size avail mount use 00:14:11.264 ++ local storage_fallback storage_candidates 00:14:11.264 +++ mktemp -udt spdk.XXXXXX 00:14:11.264 ++ storage_fallback=/tmp/spdk.ANe13r 00:14:11.264 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:11.264 ++ [[ -n '' ]] 00:14:11.264 ++ [[ -n '' ]] 00:14:11.264 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.ANe13r/tests/unit /tmp/spdk.ANe13r 00:14:11.264 ++ requested_size=2214592512 00:14:11.264 ++ read -r source fs size use avail _ mount 00:14:11.264 +++ df -T 00:14:11.264 +++ grep -v Filesystem 00:14:11.264 ++ mounts["$mount"]=tmpfs 00:14:11.264 ++ fss["$mount"]=tmpfs 00:14:11.264 ++ avails["$mount"]=1252610048 00:14:11.264 ++ sizes["$mount"]=1253683200 00:14:11.264 ++ uses["$mount"]=1073152 00:14:11.264 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ mounts["$mount"]=/dev/vda1 00:14:11.265 ++ fss["$mount"]=ext4 00:14:11.265 ++ avails["$mount"]=10382663680 00:14:11.265 ++ sizes["$mount"]=20616794112 00:14:11.265 ++ uses["$mount"]=10217353216 00:14:11.265 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ mounts["$mount"]=tmpfs 00:14:11.265 ++ fss["$mount"]=tmpfs 00:14:11.265 ++ avails["$mount"]=6268395520 00:14:11.265 ++ sizes["$mount"]=6268395520 00:14:11.265 ++ uses["$mount"]=0 00:14:11.265 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ mounts["$mount"]=tmpfs 00:14:11.265 ++ fss["$mount"]=tmpfs 00:14:11.265 ++ avails["$mount"]=5242880 00:14:11.265 ++ sizes["$mount"]=5242880 00:14:11.265 ++ uses["$mount"]=0 00:14:11.265 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ mounts["$mount"]=/dev/vda15 00:14:11.265 ++ fss["$mount"]=vfat 00:14:11.265 ++ avails["$mount"]=103061504 00:14:11.265 ++ sizes["$mount"]=109395968 00:14:11.265 ++ uses["$mount"]=6334464 00:14:11.265 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ mounts["$mount"]=tmpfs 00:14:11.265 ++ fss["$mount"]=tmpfs 00:14:11.265 ++ avails["$mount"]=1253675008 00:14:11.265 ++ sizes["$mount"]=1253679104 00:14:11.265 ++ uses["$mount"]=4096 00:14:11.265 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ 
mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:14:11.265 ++ fss["$mount"]=fuse.sshfs 00:14:11.265 ++ avails["$mount"]=94117240832 00:14:11.265 ++ sizes["$mount"]=105088212992 00:14:11.265 ++ uses["$mount"]=5585539072 00:14:11.265 ++ read -r source fs size use avail _ mount 00:14:11.265 ++ printf '* Looking for test storage...\n' 00:14:11.265 * Looking for test storage... 00:14:11.265 ++ local target_space new_size 00:14:11.265 ++ for target_dir in "${storage_candidates[@]}" 00:14:11.265 +++ awk '$1 !~ /Filesystem/{print $6}' 00:14:11.265 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:14:11.265 ++ mount=/ 00:14:11.265 ++ target_space=10382663680 00:14:11.265 ++ (( target_space == 0 || target_space < requested_size )) 00:14:11.265 ++ (( target_space >= requested_size )) 00:14:11.265 ++ [[ ext4 == tmpfs ]] 00:14:11.265 ++ [[ ext4 == ramfs ]] 00:14:11.265 ++ [[ / == / ]] 00:14:11.265 ++ new_size=12431945728 00:14:11.265 ++ (( new_size * 100 / sizes[/] > 95 )) 00:14:11.265 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:14:11.265 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:14:11.265 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:14:11.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:14:11.265 ++ return 0 00:14:11.265 ++ set -o errtrace 00:14:11.265 ++ shopt -s extdebug 00:14:11.265 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:14:11.265 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:11.265 01:44:11 -- common/autotest_common.sh@1673 -- # true 00:14:11.265 01:44:11 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:14:11.265 01:44:11 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:14:11.265 01:44:11 -- common/autotest_common.sh@29 -- # exec 00:14:11.265 01:44:11 -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:11.265 01:44:11 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:11.265 01:44:11 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:11.265 01:44:11 -- common/autotest_common.sh@18 -- # set -x 00:14:11.265 01:44:11 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:14:11.265 01:44:11 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:14:11.265 01:44:11 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:14:11.265 01:44:11 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:14:11.265 01:44:11 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:14:11.265 01:44:11 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:14:11.265 01:44:11 -- unit/unittest.sh@179 -- # hash lcov 00:14:11.265 01:44:11 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:14:11.265 01:44:11 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:14:11.265 01:44:11 -- unit/unittest.sh@180 -- # cov_avail=yes 00:14:11.265 01:44:11 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:14:11.265 01:44:11 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:14:11.265 01:44:11 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:14:11.265 01:44:11 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:14:11.265 01:44:11 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:14:11.265 --rc lcov_branch_coverage=1 00:14:11.265 --rc lcov_function_coverage=1 00:14:11.265 --rc genhtml_branch_coverage=1 00:14:11.265 --rc genhtml_function_coverage=1 00:14:11.265 --rc genhtml_legend=1 00:14:11.265 --rc geninfo_all_blocks=1 00:14:11.265 ' 00:14:11.265 01:44:11 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:14:11.265 --rc lcov_branch_coverage=1 00:14:11.265 --rc lcov_function_coverage=1 00:14:11.265 --rc genhtml_branch_coverage=1 00:14:11.265 --rc genhtml_function_coverage=1 00:14:11.265 --rc genhtml_legend=1 00:14:11.265 --rc geninfo_all_blocks=1 00:14:11.265 ' 00:14:11.265 01:44:11 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:14:11.265 --rc lcov_branch_coverage=1 00:14:11.265 --rc lcov_function_coverage=1 00:14:11.265 --rc genhtml_branch_coverage=1 00:14:11.265 --rc genhtml_function_coverage=1 00:14:11.265 --rc genhtml_legend=1 00:14:11.265 --rc geninfo_all_blocks=1 00:14:11.265 --no-external' 00:14:11.265 01:44:11 -- unit/unittest.sh@200 -- # LCOV='lcov 00:14:11.265 --rc lcov_branch_coverage=1 00:14:11.265 --rc lcov_function_coverage=1 00:14:11.265 --rc genhtml_branch_coverage=1 00:14:11.265 --rc genhtml_function_coverage=1 00:14:11.265 --rc genhtml_legend=1 00:14:11.265 --rc geninfo_all_blocks=1 00:14:11.265 --no-external' 00:14:11.265 01:44:11 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:14:17.834 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:17.834 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:30.124 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:14:30.124 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:14:30.124 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:14:30.124 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:14:30.124 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:14:30.124 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:14:56.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:14:56.698 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:14:56.699 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:14:56.699 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:14:56.699 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:14:56.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:14:56.699 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:14:56.700 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:14:56.700 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:14:58.613 01:44:58 -- unit/unittest.sh@206 -- # uname -m 00:14:58.613 01:44:58 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:14:58.613 01:44:58 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:14:58.613 01:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.613 01:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.613 01:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 ************************************ 00:14:58.613 START TEST unittest_pci_event 00:14:58.613 ************************************ 00:14:58.613 01:44:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:14:58.613 00:14:58.613 00:14:58.613 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.613 http://cunit.sourceforge.net/ 00:14:58.613 00:14:58.613 00:14:58.613 Suite: pci_event 00:14:58.613 Test: test_pci_parse_event ...[2024-04-24 01:44:58.471278] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:14:58.613 [2024-04-24 01:44:58.471940] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 
000000 00:14:58.613 passed 00:14:58.613 00:14:58.613 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.614 suites 1 1 n/a 0 0 00:14:58.614 tests 1 1 1 0 0 00:14:58.614 asserts 15 15 15 0 n/a 00:14:58.614 00:14:58.614 Elapsed time = 0.001 seconds 00:14:58.614 00:14:58.614 real 0m0.041s 00:14:58.614 user 0m0.014s 00:14:58.614 sys 0m0.022s 00:14:58.614 01:44:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.614 01:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:58.614 ************************************ 00:14:58.614 END TEST unittest_pci_event 00:14:58.614 ************************************ 00:14:58.614 01:44:58 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:14:58.614 01:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.614 01:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.614 01:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:58.614 ************************************ 00:14:58.614 START TEST unittest_include 00:14:58.614 ************************************ 00:14:58.614 01:44:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:14:58.614 00:14:58.614 00:14:58.614 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.614 http://cunit.sourceforge.net/ 00:14:58.614 00:14:58.614 00:14:58.614 Suite: histogram 00:14:58.614 Test: histogram_test ...passed 00:14:58.614 Test: histogram_merge ...passed 00:14:58.614 00:14:58.614 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.614 suites 1 1 n/a 0 0 00:14:58.614 tests 2 2 2 0 0 00:14:58.614 asserts 50 50 50 0 n/a 00:14:58.614 00:14:58.614 Elapsed time = 0.007 seconds 00:14:58.614 00:14:58.614 real 0m0.037s 00:14:58.614 user 0m0.028s 00:14:58.614 sys 0m0.009s 00:14:58.614 01:44:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.614 01:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:58.614 ************************************ 00:14:58.614 END TEST unittest_include 00:14:58.614 ************************************ 00:14:58.614 01:44:58 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:14:58.614 01:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.614 01:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.614 01:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:58.888 ************************************ 00:14:58.888 START TEST unittest_bdev 00:14:58.888 ************************************ 00:14:58.888 01:44:58 -- common/autotest_common.sh@1111 -- # unittest_bdev 00:14:58.888 01:44:58 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:14:58.888 00:14:58.888 00:14:58.888 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.888 http://cunit.sourceforge.net/ 00:14:58.888 00:14:58.888 00:14:58.888 Suite: bdev 00:14:58.888 Test: bytes_to_blocks_test ...passed 00:14:58.888 Test: num_blocks_test ...passed 00:14:58.888 Test: io_valid_test ...passed 00:14:58.888 Test: open_write_test ...[2024-04-24 01:44:58.816678] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7994:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:14:58.888 [2024-04-24 01:44:58.816957] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7994:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:14:58.888 [2024-04-24 
01:44:58.817055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7994:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:14:58.888 passed 00:14:58.888 Test: claim_test ...passed 00:14:58.888 Test: alias_add_del_test ...[2024-04-24 01:44:58.911127] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4552:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:14:58.888 [2024-04-24 01:44:58.911288] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4582:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:14:58.888 [2024-04-24 01:44:58.911360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4552:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:14:58.888 passed 00:14:58.888 Test: get_device_stat_test ...passed 00:14:59.148 Test: bdev_io_types_test ...passed 00:14:59.148 Test: bdev_io_wait_test ...passed 00:14:59.148 Test: bdev_io_spans_split_test ...passed 00:14:59.148 Test: bdev_io_boundary_split_test ...passed 00:14:59.148 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-24 01:44:59.123456] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3188:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:14:59.148 passed 00:14:59.148 Test: bdev_io_mix_split_test ...passed 00:14:59.406 Test: bdev_io_split_with_io_wait ...passed 00:14:59.406 Test: bdev_io_write_unit_split_test ...[2024-04-24 01:44:59.271232] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:14:59.406 [2024-04-24 01:44:59.271370] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:14:59.406 [2024-04-24 01:44:59.271406] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:14:59.407 [2024-04-24 01:44:59.271451] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:14:59.407 passed 00:14:59.407 Test: bdev_io_alignment_with_boundary ...passed 00:14:59.407 Test: bdev_io_alignment ...passed 00:14:59.407 Test: bdev_histograms ...passed 00:14:59.665 Test: bdev_write_zeroes ...passed 00:14:59.665 Test: bdev_compare_and_write ...passed 00:14:59.665 Test: bdev_compare ...passed 00:14:59.665 Test: bdev_compare_emulated ...passed 00:14:59.925 Test: bdev_zcopy_write ...passed 00:14:59.925 Test: bdev_zcopy_read ...passed 00:14:59.925 Test: bdev_open_while_hotremove ...passed 00:14:59.925 Test: bdev_close_while_hotremove ...passed 00:14:59.925 Test: bdev_open_ext_test ...passed 00:14:59.925 Test: bdev_open_ext_unregister ...[2024-04-24 01:44:59.827360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8100:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:14:59.925 [2024-04-24 01:44:59.827521] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8100:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:14:59.925 passed 00:14:59.925 Test: bdev_set_io_timeout ...passed 00:14:59.925 Test: bdev_set_qd_sampling ...passed 00:14:59.925 Test: lba_range_overlap ...passed 00:14:59.925 Test: lock_lba_range_check_ranges ...passed 00:15:00.184 Test: lock_lba_range_with_io_outstanding ...passed 00:15:00.184 Test: lock_lba_range_overlapped ...passed 00:15:00.184 Test: bdev_quiesce ...[2024-04-24 01:45:00.087711] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10023:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:15:00.184 passed 00:15:00.184 Test: bdev_io_abort ...passed 00:15:00.184 Test: bdev_unmap ...passed 00:15:00.184 Test: bdev_write_zeroes_split_test ...passed 00:15:00.184 Test: bdev_set_options_test ...passed 00:15:00.184 Test: bdev_get_memory_domains ...passed 00:15:00.184 Test: bdev_io_ext ...[2024-04-24 01:45:00.260702] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:15:00.443 passed 00:15:00.443 Test: bdev_io_ext_no_opts ...passed 00:15:00.443 Test: bdev_io_ext_invalid_opts ...passed 00:15:00.443 Test: bdev_io_ext_split ...passed 00:15:00.443 Test: bdev_io_ext_bounce_buffer ...passed 00:15:00.443 Test: bdev_register_uuid_alias ...[2024-04-24 01:45:00.525070] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4552:bdev_name_add: *ERROR*: Bdev name 19af2485-d817-455a-9dd2-420bff5fd989 already exists 00:15:00.443 [2024-04-24 01:45:00.525137] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7655:bdev_register: *ERROR*: Unable to add uuid:19af2485-d817-455a-9dd2-420bff5fd989 alias for bdev bdev0 00:15:00.702 passed 00:15:00.702 Test: bdev_unregister_by_name ...[2024-04-24 01:45:00.548580] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7890:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:15:00.702 [2024-04-24 01:45:00.548641] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7898:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:15:00.702 passed 00:15:00.702 Test: for_each_bdev_test ...passed 00:15:00.702 Test: bdev_seek_test ...passed 00:15:00.702 Test: bdev_copy ...passed 00:15:00.702 Test: bdev_copy_split_test ...passed 00:15:00.702 Test: examine_locks ...passed 00:15:00.702 Test: claim_v2_rwo ...[2024-04-24 01:45:00.689733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7994:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.689810] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8624:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.689825] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:00.702 passed 00:15:00.702 Test: claim_v2_rom ...[2024-04-24 01:45:00.689890] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.689907] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8461:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.689949] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8619:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:15:00.702 [2024-04-24 01:45:00.690083] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7994:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690131] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690150] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690176] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8461:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690240] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8662:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:15:00.702 passed 00:15:00.702 Test: claim_v2_rwm ...[2024-04-24 01:45:00.690275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8657:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:15:00.702 [2024-04-24 01:45:00.690369] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8692:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:15:00.702 [2024-04-24 01:45:00.690415] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7994:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690436] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:00.702 passed 00:15:00.702 Test: claim_v2_existing_writer ...[2024-04-24 01:45:00.690462] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690479] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8461:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690507] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8712:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:15:00.702 [2024-04-24 01:45:00.690551] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8692:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:15:00.702 [2024-04-24 01:45:00.690694] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8657:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:15:00.703 [2024-04-24 01:45:00.690721] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8657:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:15:00.703 passed 00:15:00.703 Test: claim_v2_existing_v1 ...passed 00:15:00.703 Test: claim_v1_existing_v2 ...[2024-04-24 01:45:00.690819] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:15:00.703 [2024-04-24 01:45:00.690846] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:15:00.703 [2024-04-24 01:45:00.690863] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:15:00.703 passed 00:15:00.703 Test: examine_claimed ...[2024-04-24 01:45:00.690972] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8461:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:00.703 [2024-04-24 01:45:00.691019] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8461:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:15:00.703 [2024-04-24 01:45:00.691049] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8461:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:00.703 [2024-04-24 01:45:00.691290] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8789:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:15:00.703 passed 00:15:00.703 00:15:00.703 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.703 suites 1 1 n/a 0 0 00:15:00.703 tests 59 59 59 0 0 00:15:00.703 asserts 4599 4599 4599 0 n/a 00:15:00.703 00:15:00.703 Elapsed time = 1.947 seconds 00:15:00.703 01:45:00 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:15:00.703 00:15:00.703 00:15:00.703 CUnit - A unit testing framework for C - Version 2.1-3 00:15:00.703 http://cunit.sourceforge.net/ 00:15:00.703 00:15:00.703 00:15:00.703 Suite: nvme 00:15:00.703 Test: test_create_ctrlr ...passed 00:15:00.703 Test: test_reset_ctrlr ...[2024-04-24 01:45:00.757872] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.703 passed 00:15:00.703 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:15:00.703 Test: test_failover_ctrlr ...passed 00:15:00.703 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-24 01:45:00.761331] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.703 [2024-04-24 01:45:00.761761] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.703 [2024-04-24 01:45:00.762116] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.703 passed 00:15:00.703 Test: test_pending_reset ...[2024-04-24 01:45:00.764107] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.703 [2024-04-24 01:45:00.764587] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.703 passed 00:15:00.703 Test: test_attach_ctrlr ...[2024-04-24 01:45:00.766210] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:15:00.703 passed 00:15:00.703 Test: test_aer_cb ...passed 00:15:00.703 Test: test_submit_nvme_cmd ...passed 00:15:00.703 Test: test_add_remove_trid ...passed 00:15:00.703 Test: test_abort ...[2024-04-24 01:45:00.770808] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7388:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:15:00.703 passed 00:15:00.703 Test: test_get_io_qpair ...passed 00:15:00.703 Test: test_bdev_unregister ...passed 00:15:00.703 Test: test_compare_ns ...passed 00:15:00.703 Test: test_init_ana_log_page ...passed 00:15:00.703 Test: test_get_memory_domains ...passed 00:15:00.703 Test: test_reconnect_qpair ...[2024-04-24 01:45:00.774852] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:00.703 passed 00:15:00.703 Test: test_create_bdev_ctrlr ...[2024-04-24 01:45:00.775842] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5336:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:15:00.703 passed 00:15:00.703 Test: test_add_multi_ns_to_bdev ...[2024-04-24 01:45:00.777718] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4528:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:15:00.703 passed 00:15:00.703 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:15:00.703 Test: test_admin_path ...passed 00:15:00.703 Test: test_reset_bdev_ctrlr ...passed 00:15:00.703 Test: test_find_io_path ...passed 00:15:00.703 Test: test_retry_io_if_ana_state_is_updating ...passed 00:15:00.703 Test: test_retry_io_for_io_path_error ...passed 00:15:00.703 Test: test_retry_io_count ...passed 00:15:00.703 Test: test_concurrent_read_ana_log_page ...passed 00:15:00.703 Test: test_retry_io_for_ana_error ...passed 00:15:00.703 Test: test_check_io_error_resiliency_params ...[2024-04-24 01:45:00.786703] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6018:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:15:00.703 [2024-04-24 01:45:00.786785] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6022:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:15:00.703 [2024-04-24 01:45:00.786827] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6031:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:15:00.703 [2024-04-24 01:45:00.786868] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6034:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:15:00.703 [2024-04-24 01:45:00.787212] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6046:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:15:00.963 [2024-04-24 01:45:00.787344] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6046:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:15:00.963 [2024-04-24 01:45:00.787378] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6026:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:15:00.963 [2024-04-24 01:45:00.787752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6041:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:15:00.963 [2024-04-24 01:45:00.787794] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6038:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:15:00.963 passed 00:15:00.963 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:15:00.963 Test: test_reconnect_ctrlr ...[2024-04-24 01:45:00.789070] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.789217] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:00.964 [2024-04-24 01:45:00.789805] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.789936] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.790171] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 passed 00:15:00.964 Test: test_retry_failover_ctrlr ...[2024-04-24 01:45:00.791207] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 passed 00:15:00.964 Test: test_fail_path ...[2024-04-24 01:45:00.792149] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.792451] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.792602] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.792877] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.793353] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 passed 00:15:00.964 Test: test_nvme_ns_cmp ...passed 00:15:00.964 Test: test_ana_transition ...passed 00:15:00.964 Test: test_set_preferred_path ...passed 00:15:00.964 Test: test_find_next_io_path ...passed 00:15:00.964 Test: test_find_io_path_min_qd ...passed 00:15:00.964 Test: test_disable_auto_failback ...[2024-04-24 01:45:00.795805] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 passed 00:15:00.964 Test: test_set_multipath_policy ...passed 00:15:00.964 Test: test_uuid_generation ...passed 00:15:00.964 Test: test_retry_io_to_same_path ...passed 00:15:00.964 Test: test_race_between_reset_and_disconnected ...passed 00:15:00.964 Test: test_ctrlr_op_rpc ...passed 00:15:00.964 Test: test_bdev_ctrlr_op_rpc ...passed 00:15:00.964 Test: test_disable_enable_ctrlr ...[2024-04-24 01:45:00.800917] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:00.964 [2024-04-24 01:45:00.801268] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:00.964 passed 00:15:00.964 Test: test_delete_ctrlr_done ...passed 00:15:00.964 Test: test_ns_remove_during_reset ...passed 00:15:00.964 00:15:00.964 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.964 suites 1 1 n/a 0 0 00:15:00.964 tests 48 48 48 0 0 00:15:00.964 asserts 3565 3565 3565 0 n/a 00:15:00.964 00:15:00.964 Elapsed time = 0.047 seconds 00:15:00.964 01:45:00 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:15:00.964 00:15:00.964 00:15:00.964 CUnit - A unit testing framework for C - Version 2.1-3 00:15:00.964 http://cunit.sourceforge.net/ 00:15:00.964 00:15:00.964 Test Options 00:15:00.964 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:15:00.964 00:15:00.964 Suite: raid 00:15:00.964 Test: test_create_raid ...passed 00:15:00.964 Test: test_create_raid_superblock ...passed 00:15:00.964 Test: test_delete_raid ...passed 00:15:00.964 Test: test_create_raid_invalid_args ...[2024-04-24 01:45:00.855231] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:15:00.964 [2024-04-24 01:45:00.855802] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:15:00.964 [2024-04-24 01:45:00.856386] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:15:00.964 [2024-04-24 01:45:00.856681] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:15:00.964 [2024-04-24 01:45:00.857630] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:15:00.964 passed 00:15:00.964 Test: test_delete_raid_invalid_args ...passed 00:15:00.964 Test: test_io_channel ...passed 00:15:00.964 Test: test_reset_io ...passed 00:15:00.964 Test: test_write_io ...passed 00:15:00.964 Test: test_read_io ...passed 00:15:02.344 Test: test_unmap_io ...passed 00:15:02.344 Test: test_io_failure ...[2024-04-24 01:45:02.092277] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:15:02.344 passed 00:15:02.344 Test: test_multi_raid_no_io ...passed 00:15:02.344 Test: test_multi_raid_with_io ...passed 00:15:02.344 Test: test_io_type_supported ...passed 00:15:02.344 Test: test_raid_json_dump_info ...passed 00:15:02.344 Test: test_context_size ...passed 00:15:02.344 Test: test_raid_level_conversions ...passed 00:15:02.344 Test: test_raid_io_split ...passedTest Options 00:15:02.344 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:15:02.344 00:15:02.344 Suite: raid_dif 00:15:02.344 Test: test_create_raid ...passed 00:15:02.344 Test: test_create_raid_superblock ...passed 00:15:02.344 Test: test_delete_raid ...passed 00:15:02.344 Test: test_create_raid_invalid_args ...[2024-04-24 01:45:02.100616] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:15:02.344 [2024-04-24 01:45:02.100742] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:15:02.344 [2024-04-24 01:45:02.101096] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:15:02.344 [2024-04-24 01:45:02.101219] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:15:02.344 [2024-04-24 01:45:02.101932] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:15:02.344 passed 00:15:02.344 Test: test_delete_raid_invalid_args ...passed 00:15:02.344 Test: test_io_channel ...passed 00:15:02.344 Test: test_reset_io ...passed 00:15:02.344 Test: test_write_io ...passed 00:15:02.344 Test: test_read_io ...passed 00:15:03.290 Test: test_unmap_io ...passed 00:15:03.290 Test: test_io_failure ...[2024-04-24 01:45:03.247437] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:15:03.290 passed 00:15:03.290 Test: test_multi_raid_no_io ...passed 00:15:03.290 Test: test_multi_raid_with_io ...passed 00:15:03.290 Test: test_io_type_supported ...passed 00:15:03.290 Test: test_raid_json_dump_info ...passed 00:15:03.290 Test: test_context_size ...passed 00:15:03.290 Test: test_raid_level_conversions ...passed 00:15:03.290 Test: test_raid_io_split ...passedTest Options 00:15:03.290 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:15:03.290 00:15:03.290 Suite: raid_single_run 00:15:03.290 Test: test_raid_process ...passed 00:15:03.290 00:15:03.290 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.290 suites 3 3 n/a 0 0 00:15:03.290 tests 37 37 37 0 0 00:15:03.290 asserts 355354 355354 355354 0 n/a 00:15:03.290 00:15:03.290 Elapsed time = 2.405 seconds 00:15:03.290 01:45:03 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:15:03.290 00:15:03.290 00:15:03.290 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.290 http://cunit.sourceforge.net/ 00:15:03.290 00:15:03.290 00:15:03.290 Suite: raid_sb 00:15:03.290 Test: test_raid_bdev_write_superblock ...passed 00:15:03.290 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:15:03.290 Test: test_raid_bdev_parse_superblock ...[2024-04-24 01:45:03.307297] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:15:03.290 passed 00:15:03.290 Suite: raid_sb_md 00:15:03.291 Test: test_raid_bdev_write_superblock ...passed 00:15:03.291 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:15:03.291 Test: test_raid_bdev_parse_superblock ...[2024-04-24 01:45:03.307791] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:15:03.291 passed 00:15:03.291 Suite: raid_sb_md_interleaved 00:15:03.291 Test: test_raid_bdev_write_superblock ...passed 00:15:03.291 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:15:03.291 Test: test_raid_bdev_parse_superblock ...[2024-04-24 01:45:03.308081] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:15:03.291 passed 00:15:03.291 00:15:03.291 Run Summary: Type Total Ran Passed Failed Inactive 
00:15:03.291 suites 3 3 n/a 0 0 00:15:03.291 tests 9 9 9 0 0 00:15:03.291 asserts 136 136 136 0 n/a 00:15:03.291 00:15:03.291 Elapsed time = 0.002 seconds 00:15:03.291 01:45:03 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:15:03.291 00:15:03.291 00:15:03.291 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.291 http://cunit.sourceforge.net/ 00:15:03.291 00:15:03.291 00:15:03.291 Suite: concat 00:15:03.291 Test: test_concat_start ...passed 00:15:03.291 Test: test_concat_rw ...passed 00:15:03.291 Test: test_concat_null_payload ...passed 00:15:03.291 00:15:03.291 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.291 suites 1 1 n/a 0 0 00:15:03.291 tests 3 3 3 0 0 00:15:03.291 asserts 8460 8460 8460 0 n/a 00:15:03.291 00:15:03.291 Elapsed time = 0.006 seconds 00:15:03.291 01:45:03 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:15:03.551 00:15:03.551 00:15:03.551 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.551 http://cunit.sourceforge.net/ 00:15:03.551 00:15:03.551 00:15:03.551 Suite: raid1 00:15:03.551 Test: test_raid1_start ...passed 00:15:03.551 Test: test_raid1_read_balancing ...passed 00:15:03.551 00:15:03.551 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.551 suites 1 1 n/a 0 0 00:15:03.551 tests 2 2 2 0 0 00:15:03.551 asserts 2880 2880 2880 0 n/a 00:15:03.551 00:15:03.551 Elapsed time = 0.005 seconds 00:15:03.551 01:45:03 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:15:03.551 00:15:03.551 00:15:03.551 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.551 http://cunit.sourceforge.net/ 00:15:03.551 00:15:03.551 00:15:03.551 Suite: zone 00:15:03.551 Test: test_zone_get_operation ...passed 00:15:03.551 Test: test_bdev_zone_get_info ...passed 00:15:03.551 Test: test_bdev_zone_management ...passed 00:15:03.551 Test: test_bdev_zone_append ...passed 00:15:03.551 Test: test_bdev_zone_append_with_md ...passed 00:15:03.551 Test: test_bdev_zone_appendv ...passed 00:15:03.551 Test: test_bdev_zone_appendv_with_md ...passed 00:15:03.551 Test: test_bdev_io_get_append_location ...passed 00:15:03.551 00:15:03.551 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.551 suites 1 1 n/a 0 0 00:15:03.551 tests 8 8 8 0 0 00:15:03.551 asserts 94 94 94 0 n/a 00:15:03.551 00:15:03.551 Elapsed time = 0.000 seconds 00:15:03.551 01:45:03 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:15:03.551 00:15:03.551 00:15:03.551 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.551 http://cunit.sourceforge.net/ 00:15:03.551 00:15:03.551 00:15:03.551 Suite: gpt_parse 00:15:03.551 Test: test_parse_mbr_and_primary ...[2024-04-24 01:45:03.478430] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:03.551 [2024-04-24 01:45:03.478917] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:03.551 [2024-04-24 01:45:03.479019] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:15:03.551 [2024-04-24 01:45:03.479158] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:15:03.551 [2024-04-24 01:45:03.479214] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:15:03.551 [2024-04-24 01:45:03.479334] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:15:03.551 passed 00:15:03.551 Test: test_parse_secondary ...[2024-04-24 01:45:03.480094] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:15:03.551 [2024-04-24 01:45:03.480348] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:15:03.551 [2024-04-24 01:45:03.480407] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:15:03.551 [2024-04-24 01:45:03.480456] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:15:03.551 passed 00:15:03.551 Test: test_check_mbr ...[2024-04-24 01:45:03.481204] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:03.551 passed 00:15:03.551 Test: test_read_header ...[2024-04-24 01:45:03.481271] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:03.551 [2024-04-24 01:45:03.481355] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:15:03.551 [2024-04-24 01:45:03.481472] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:15:03.551 [2024-04-24 01:45:03.481566] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:15:03.551 [2024-04-24 01:45:03.481620] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:15:03.551 passed 00:15:03.551 Test: test_read_partitions ...[2024-04-24 01:45:03.481672] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:15:03.551 [2024-04-24 01:45:03.481721] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:15:03.551 [2024-04-24 01:45:03.481794] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:15:03.551 [2024-04-24 01:45:03.481857] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:15:03.551 [2024-04-24 01:45:03.481908] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:15:03.551 [2024-04-24 01:45:03.481949] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:15:03.551 [2024-04-24 01:45:03.482336] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:15:03.551 passed 00:15:03.551 00:15:03.551 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.551 suites 1 1 n/a 0 0 00:15:03.551 tests 5 5 5 0 0 00:15:03.551 asserts 33 33 33 0 n/a 00:15:03.551 00:15:03.551 Elapsed time = 0.005 seconds 
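The gpt_parse summary above closes out another of the standalone CUnit binaries invoked by unit/unittest.sh. These binaries are ordinary executables, so their run summaries (and the earlier geninfo "no functions found" warnings, which just reflect header-compilation stubs that contain no functions) can usually be reproduced outside Jenkins. The lines below are a minimal local sketch, not the CI harness itself: SPDK_DIR and the --enable-coverage build are assumptions about the local layout, and the gpt_ut path is copied from the log lines above.
  # Minimal sketch (assumed local layout, not the CI harness): re-run one CUnit
  # binary and regenerate coverage data by hand.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk       # assumption: same checkout path as the CI VM
  cd "$SPDK_DIR"
  ./test/unit/lib/bdev/gpt/gpt.c/gpt_ut       # prints a CUnit run summary like the one above
  echo "gpt_ut exit code: $?"                 # non-zero if any test in the suite failed
  # Optional coverage capture; header-only .gcno files will again warn
  # "no functions found", which is expected and harmless here.
  lcov --capture --directory . --output-file unit_cov.info
  genhtml unit_cov.info --output-directory unit_cov_html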
00:15:03.551 01:45:03 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:15:03.551 00:15:03.551 00:15:03.551 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.551 http://cunit.sourceforge.net/ 00:15:03.551 00:15:03.551 00:15:03.551 Suite: bdev_part 00:15:03.551 Test: part_test ...[2024-04-24 01:45:03.532058] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4552:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:15:03.551 passed 00:15:03.551 Test: part_free_test ...passed 00:15:03.551 Test: part_get_io_channel_test ...passed 00:15:03.551 Test: part_construct_ext ...passed 00:15:03.551 00:15:03.551 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.551 suites 1 1 n/a 0 0 00:15:03.551 tests 4 4 4 0 0 00:15:03.551 asserts 48 48 48 0 n/a 00:15:03.551 00:15:03.551 Elapsed time = 0.068 seconds 00:15:03.551 01:45:03 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:15:03.810 00:15:03.810 00:15:03.810 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.810 http://cunit.sourceforge.net/ 00:15:03.810 00:15:03.810 00:15:03.810 Suite: scsi_nvme_suite 00:15:03.810 Test: scsi_nvme_translate_test ...passed 00:15:03.810 00:15:03.810 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.810 suites 1 1 n/a 0 0 00:15:03.810 tests 1 1 1 0 0 00:15:03.810 asserts 104 104 104 0 n/a 00:15:03.810 00:15:03.810 Elapsed time = 0.000 seconds 00:15:03.810 01:45:03 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:15:03.810 00:15:03.810 00:15:03.810 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.810 http://cunit.sourceforge.net/ 00:15:03.810 00:15:03.810 00:15:03.810 Suite: lvol 00:15:03.810 Test: ut_lvs_init ...[2024-04-24 01:45:03.691455] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:15:03.810 [2024-04-24 01:45:03.691963] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:15:03.810 passed 00:15:03.810 Test: ut_lvol_init ...passed 00:15:03.811 Test: ut_lvol_snapshot ...passed 00:15:03.811 Test: ut_lvol_clone ...passed 00:15:03.811 Test: ut_lvs_destroy ...passed 00:15:03.811 Test: ut_lvs_unload ...passed 00:15:03.811 Test: ut_lvol_resize ...[2024-04-24 01:45:03.693988] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:15:03.811 passed 00:15:03.811 Test: ut_lvol_set_read_only ...passed 00:15:03.811 Test: ut_lvol_hotremove ...passed 00:15:03.811 Test: ut_vbdev_lvol_get_io_channel ...passed 00:15:03.811 Test: ut_vbdev_lvol_io_type_supported ...passed 00:15:03.811 Test: ut_lvol_read_write ...passed 00:15:03.811 Test: ut_vbdev_lvol_submit_request ...passed 00:15:03.811 Test: ut_lvol_examine_config ...passed 00:15:03.811 Test: ut_lvol_examine_disk ...[2024-04-24 01:45:03.694901] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:15:03.811 passed 00:15:03.811 Test: ut_lvol_rename ...[2024-04-24 01:45:03.696185] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:15:03.811 [2024-04-24 01:45:03.696327] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' 
does not succeed 00:15:03.811 passed 00:15:03.811 Test: ut_bdev_finish ...passed 00:15:03.811 Test: ut_lvs_rename ...passed 00:15:03.811 Test: ut_lvol_seek ...passed 00:15:03.811 Test: ut_esnap_dev_create ...[2024-04-24 01:45:03.697214] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:15:03.811 [2024-04-24 01:45:03.697310] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:15:03.811 passed 00:15:03.811 Test: ut_lvol_esnap_clone_bad_args ...[2024-04-24 01:45:03.697353] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:15:03.811 [2024-04-24 01:45:03.697402] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:15:03.811 [2024-04-24 01:45:03.697584] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:15:03.811 [2024-04-24 01:45:03.697636] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:15:03.811 passed 00:15:03.811 00:15:03.811 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.811 suites 1 1 n/a 0 0 00:15:03.811 tests 21 21 21 0 0 00:15:03.811 asserts 758 758 758 0 n/a 00:15:03.811 00:15:03.811 Elapsed time = 0.007 seconds 00:15:03.811 01:45:03 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:15:03.811 00:15:03.811 00:15:03.811 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.811 http://cunit.sourceforge.net/ 00:15:03.811 00:15:03.811 00:15:03.811 Suite: zone_block 00:15:03.811 Test: test_zone_block_create ...passed 00:15:03.811 Test: test_zone_block_create_invalid ...[2024-04-24 01:45:03.772743] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:15:03.811 [2024-04-24 01:45:03.773191] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-24 01:45:03.773423] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:15:03.811 [2024-04-24 01:45:03.773525] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-24 01:45:03.773732] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:15:03.811 [2024-04-24 01:45:03.773804] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-24 01:45:03.773939] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:15:03.811 [2024-04-24 01:45:03.774001] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned 
vbdev: Invalid argumentpassed 00:15:03.811 Test: test_get_zone_info ...[2024-04-24 01:45:03.774763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.774856] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.774929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.811 Test: test_supported_io_types ...passed 00:15:03.811 Test: test_reset_zone ...[2024-04-24 01:45:03.775954] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.776025] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.811 Test: test_open_zone ...[2024-04-24 01:45:03.776596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.777427] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.777532] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.811 Test: test_zone_write ...[2024-04-24 01:45:03.778149] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:15:03.811 [2024-04-24 01:45:03.778237] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.778294] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:15:03.811 [2024-04-24 01:45:03.778359] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.785773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:15:03.811 [2024-04-24 01:45:03.785860] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.785947] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:15:03.811 [2024-04-24 01:45:03.785981] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:15:03.811 [2024-04-24 01:45:03.792929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:15:03.811 [2024-04-24 01:45:03.793043] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.811 Test: test_zone_read ...[2024-04-24 01:45:03.793619] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:15:03.811 [2024-04-24 01:45:03.793679] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.793764] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:15:03.811 [2024-04-24 01:45:03.793812] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.794371] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:15:03.811 [2024-04-24 01:45:03.794433] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.811 Test: test_close_zone ...[2024-04-24 01:45:03.794906] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.795038] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.795292] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.795357] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.811 Test: test_finish_zone ...[2024-04-24 01:45:03.796103] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 [2024-04-24 01:45:03.796195] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.811 passed 00:15:03.812 Test: test_append_zone ...[2024-04-24 01:45:03.796647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:15:03.812 [2024-04-24 01:45:03.796695] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.812 [2024-04-24 01:45:03.796768] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:15:03.812 [2024-04-24 01:45:03.796801] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:15:03.812 [2024-04-24 01:45:03.810336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:15:03.812 [2024-04-24 01:45:03.810426] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:03.812 passed 00:15:03.812 00:15:03.812 Run Summary: Type Total Ran Passed Failed Inactive 00:15:03.812 suites 1 1 n/a 0 0 00:15:03.812 tests 11 11 11 0 0 00:15:03.812 asserts 3437 3437 3437 0 n/a 00:15:03.812 00:15:03.812 Elapsed time = 0.040 seconds 00:15:03.812 01:45:03 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:15:04.071 00:15:04.071 00:15:04.071 CUnit - A unit testing framework for C - Version 2.1-3 00:15:04.071 http://cunit.sourceforge.net/ 00:15:04.071 00:15:04.071 00:15:04.071 Suite: bdev 00:15:04.071 Test: basic ...[2024-04-24 01:45:03.932703] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d7ea29b921): Operation not permitted (rc=-1) 00:15:04.071 [2024-04-24 01:45:03.933008] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55d7ea29b8e0): Operation not permitted (rc=-1) 00:15:04.071 [2024-04-24 01:45:03.933051] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d7ea29b921): Operation not permitted (rc=-1) 00:15:04.071 passed 00:15:04.071 Test: unregister_and_close ...passed 00:15:04.071 Test: unregister_and_close_different_threads ...passed 00:15:04.071 Test: basic_qos ...passed 00:15:04.329 Test: put_channel_during_reset ...passed 00:15:04.329 Test: aborted_reset ...passed 00:15:04.329 Test: aborted_reset_no_outstanding_io ...passed 00:15:04.329 Test: io_during_reset ...passed 00:15:04.329 Test: reset_completions ...passed 00:15:04.588 Test: io_during_qos_queue ...passed 00:15:04.588 Test: io_during_qos_reset ...passed 00:15:04.588 Test: enomem ...passed 00:15:04.588 Test: enomem_multi_bdev ...passed 00:15:04.588 Test: enomem_multi_bdev_unregister ...passed 00:15:04.845 Test: enomem_multi_io_target ...passed 00:15:04.845 Test: qos_dynamic_enable ...passed 00:15:04.845 Test: bdev_histograms_mt ...passed 00:15:04.845 Test: bdev_set_io_timeout_mt ...[2024-04-24 01:45:04.849797] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:15:04.845 passed 00:15:04.845 Test: lock_lba_range_then_submit_io ...[2024-04-24 01:45:04.874643] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55d7ea29b8a0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:15:04.845 passed 00:15:05.104 Test: unregister_during_reset ...passed 00:15:05.104 Test: event_notify_and_close ...passed 00:15:05.104 Suite: bdev_wrong_thread 00:15:05.104 Test: spdk_bdev_register_wt ...[2024-04-24 01:45:05.009187] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8418:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:15:05.104 passed 00:15:05.104 Test: spdk_bdev_examine_wt ...[2024-04-24 01:45:05.009502] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:15:05.104 passed 00:15:05.104 00:15:05.104 Run Summary: Type Total Ran Passed Failed Inactive 00:15:05.104 suites 2 2 n/a 0 0 00:15:05.104 tests 23 23 23 0 0 00:15:05.104 
asserts 601 601 601 0 n/a 00:15:05.104 00:15:05.104 Elapsed time = 1.104 seconds 00:15:05.104 00:15:05.104 real 0m6.310s 00:15:05.104 user 0m2.691s 00:15:05.104 sys 0m3.624s 00:15:05.104 01:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:05.104 01:45:05 -- common/autotest_common.sh@10 -- # set +x 00:15:05.104 ************************************ 00:15:05.104 END TEST unittest_bdev 00:15:05.104 ************************************ 00:15:05.104 01:45:05 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:05.104 01:45:05 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:05.104 01:45:05 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:05.104 01:45:05 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:05.104 01:45:05 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:15:05.104 01:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:05.104 01:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.104 01:45:05 -- common/autotest_common.sh@10 -- # set +x 00:15:05.104 ************************************ 00:15:05.104 START TEST unittest_bdev_raid5f 00:15:05.104 ************************************ 00:15:05.104 01:45:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:15:05.104 00:15:05.104 00:15:05.104 CUnit - A unit testing framework for C - Version 2.1-3 00:15:05.104 http://cunit.sourceforge.net/ 00:15:05.104 00:15:05.104 00:15:05.104 Suite: raid5f 00:15:05.104 Test: test_raid5f_start ...passed 00:15:05.670 Test: test_raid5f_submit_read_request ...passed 00:15:05.929 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:15:09.245 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:15:27.374 Test: test_raid5f_chunk_write_error ...passed 00:15:35.562 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:15:38.874 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:16:10.977 Test: test_raid5f_submit_read_request_degraded ...passed 00:16:10.977 00:16:10.977 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.977 suites 1 1 n/a 0 0 00:16:10.977 tests 8 8 8 0 0 00:16:10.977 asserts 352392 352392 352392 0 n/a 00:16:10.977 00:16:10.977 Elapsed time = 60.299 seconds 00:16:10.977 00:16:10.977 real 1m0.498s 00:16:10.977 user 0m57.158s 00:16:10.977 sys 0m3.239s 00:16:10.977 01:46:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.977 01:46:05 -- common/autotest_common.sh@10 -- # set +x 00:16:10.977 ************************************ 00:16:10.977 END TEST unittest_bdev_raid5f 00:16:10.977 ************************************ 00:16:10.977 01:46:05 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:16:10.977 01:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:10.977 01:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.977 01:46:05 -- common/autotest_common.sh@10 -- # set +x 00:16:10.977 ************************************ 00:16:10.977 START TEST unittest_blob_blobfs 00:16:10.977 ************************************ 00:16:10.977 01:46:05 -- common/autotest_common.sh@1111 -- # 
unittest_blob 00:16:10.977 01:46:05 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:16:10.977 01:46:05 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:16:10.977 00:16:10.977 00:16:10.977 CUnit - A unit testing framework for C - Version 2.1-3 00:16:10.977 http://cunit.sourceforge.net/ 00:16:10.977 00:16:10.977 00:16:10.977 Suite: blob_nocopy_noextent 00:16:10.977 Test: blob_init ...[2024-04-24 01:46:05.787436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:16:10.977 passed 00:16:10.977 Test: blob_thin_provision ...passed 00:16:10.977 Test: blob_read_only ...passed 00:16:10.977 Test: bs_load ...[2024-04-24 01:46:05.903720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:16:10.977 passed 00:16:10.977 Test: bs_load_custom_cluster_size ...passed 00:16:10.977 Test: bs_load_after_failed_grow ...passed 00:16:10.977 Test: bs_cluster_sz ...[2024-04-24 01:46:05.940890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:16:10.977 [2024-04-24 01:46:05.941377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:16:10.977 [2024-04-24 01:46:05.941566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:16:10.977 passed 00:16:10.977 Test: bs_resize_md ...passed 00:16:10.977 Test: bs_destroy ...passed 00:16:10.977 Test: bs_type ...passed 00:16:10.977 Test: bs_super_block ...passed 00:16:10.977 Test: bs_test_recover_cluster_count ...passed 00:16:10.977 Test: bs_grow_live ...passed 00:16:10.977 Test: bs_grow_live_no_space ...passed 00:16:10.977 Test: bs_test_grow ...passed 00:16:10.977 Test: blob_serialize_test ...passed 00:16:10.977 Test: super_block_crc ...passed 00:16:10.977 Test: blob_thin_prov_write_count_io ...passed 00:16:10.977 Test: blob_thin_prov_unmap_cluster ...passed 00:16:10.977 Test: bs_load_iter_test ...passed 00:16:10.977 Test: blob_relations ...[2024-04-24 01:46:06.143714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.977 [2024-04-24 01:46:06.143842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.977 [2024-04-24 01:46:06.144819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.977 [2024-04-24 01:46:06.144880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.977 passed 00:16:10.977 Test: blob_relations2 ...[2024-04-24 01:46:06.159711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.977 [2024-04-24 01:46:06.159805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.977 [2024-04-24 01:46:06.159859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.977 
[2024-04-24 01:46:06.159892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.977 [2024-04-24 01:46:06.161434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.977 [2024-04-24 01:46:06.161518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.977 [2024-04-24 01:46:06.161913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.978 [2024-04-24 01:46:06.161967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 passed 00:16:10.978 Test: blob_relations3 ...passed 00:16:10.978 Test: blobstore_clean_power_failure ...passed 00:16:10.978 Test: blob_delete_snapshot_power_failure ...[2024-04-24 01:46:06.317053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:16:10.978 [2024-04-24 01:46:06.329353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:10.978 [2024-04-24 01:46:06.329445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:10.978 [2024-04-24 01:46:06.329481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:06.341669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:16:10.978 [2024-04-24 01:46:06.341760] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:16:10.978 [2024-04-24 01:46:06.341788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:10.978 [2024-04-24 01:46:06.341845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:06.354073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:16:10.978 [2024-04-24 01:46:06.354187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:06.366524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:16:10.978 [2024-04-24 01:46:06.366665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:06.378971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:16:10.978 [2024-04-24 01:46:06.379081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 passed 00:16:10.978 Test: blob_create_snapshot_power_failure ...[2024-04-24 01:46:06.415651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:10.978 [2024-04-24 01:46:06.439840] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:16:10.978 [2024-04-24 01:46:06.452472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:16:10.978 passed 00:16:10.978 Test: blob_io_unit ...passed 00:16:10.978 Test: blob_io_unit_compatibility ...passed 00:16:10.978 Test: blob_ext_md_pages ...passed 00:16:10.978 Test: blob_esnap_io_4096_4096 ...passed 00:16:10.978 Test: blob_esnap_io_512_512 ...passed 00:16:10.978 Test: blob_esnap_io_4096_512 ...passed 00:16:10.978 Test: blob_esnap_io_512_4096 ...passed 00:16:10.978 Suite: blob_bs_nocopy_noextent 00:16:10.978 Test: blob_open ...passed 00:16:10.978 Test: blob_create ...[2024-04-24 01:46:06.689957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:16:10.978 passed 00:16:10.978 Test: blob_create_loop ...passed 00:16:10.978 Test: blob_create_fail ...[2024-04-24 01:46:06.783315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:10.978 passed 00:16:10.978 Test: blob_create_internal ...passed 00:16:10.978 Test: blob_create_zero_extent ...passed 00:16:10.978 Test: blob_snapshot ...passed 00:16:10.978 Test: blob_clone ...passed 00:16:10.978 Test: blob_inflate ...[2024-04-24 01:46:06.969507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:16:10.978 passed 00:16:10.978 Test: blob_delete ...passed 00:16:10.978 Test: blob_resize_test ...[2024-04-24 01:46:07.035584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:16:10.978 passed 00:16:10.978 Test: channel_ops ...passed 00:16:10.978 Test: blob_super ...passed 00:16:10.978 Test: blob_rw_verify_iov ...passed 00:16:10.978 Test: blob_unmap ...passed 00:16:10.978 Test: blob_iter ...passed 00:16:10.978 Test: blob_parse_md ...passed 00:16:10.978 Test: bs_load_pending_removal ...passed 00:16:10.978 Test: bs_unload ...[2024-04-24 01:46:07.295132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:16:10.978 passed 00:16:10.978 Test: bs_usable_clusters ...passed 00:16:10.978 Test: blob_crc ...[2024-04-24 01:46:07.360372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:10.978 [2024-04-24 01:46:07.360505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:10.978 passed 00:16:10.978 Test: blob_flags ...passed 00:16:10.978 Test: bs_version ...passed 00:16:10.978 Test: blob_set_xattrs_test ...[2024-04-24 01:46:07.459288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:10.978 [2024-04-24 01:46:07.459384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:10.978 passed 00:16:10.978 Test: blob_thin_prov_alloc ...passed 00:16:10.978 Test: blob_insert_cluster_msg_test ...passed 00:16:10.978 Test: blob_thin_prov_rw ...passed 
00:16:10.978 Test: blob_thin_prov_rle ...passed 00:16:10.978 Test: blob_thin_prov_rw_iov ...passed 00:16:10.978 Test: blob_snapshot_rw ...passed 00:16:10.978 Test: blob_snapshot_rw_iov ...passed 00:16:10.978 Test: blob_inflate_rw ...passed 00:16:10.978 Test: blob_snapshot_freeze_io ...passed 00:16:10.978 Test: blob_operation_split_rw ...passed 00:16:10.978 Test: blob_operation_split_rw_iov ...passed 00:16:10.978 Test: blob_simultaneous_operations ...[2024-04-24 01:46:08.364651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:10.978 [2024-04-24 01:46:08.364755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:08.365997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:10.978 [2024-04-24 01:46:08.366054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:08.378124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:10.978 [2024-04-24 01:46:08.378183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 [2024-04-24 01:46:08.378297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:10.978 [2024-04-24 01:46:08.378322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.978 passed 00:16:10.978 Test: blob_persist_test ...passed 00:16:10.978 Test: blob_decouple_snapshot ...passed 00:16:10.978 Test: blob_seek_io_unit ...passed 00:16:10.978 Test: blob_nested_freezes ...passed 00:16:10.978 Suite: blob_blob_nocopy_noextent 00:16:10.978 Test: blob_write ...passed 00:16:10.978 Test: blob_read ...passed 00:16:10.978 Test: blob_rw_verify ...passed 00:16:10.978 Test: blob_rw_verify_iov_nomem ...passed 00:16:10.978 Test: blob_rw_iov_read_only ...passed 00:16:10.978 Test: blob_xattr ...passed 00:16:10.978 Test: blob_dirty_shutdown ...passed 00:16:10.978 Test: blob_is_degraded ...passed 00:16:10.978 Suite: blob_esnap_bs_nocopy_noextent 00:16:10.978 Test: blob_esnap_create ...passed 00:16:10.978 Test: blob_esnap_thread_add_remove ...passed 00:16:10.978 Test: blob_esnap_clone_snapshot ...passed 00:16:10.978 Test: blob_esnap_clone_inflate ...passed 00:16:10.978 Test: blob_esnap_clone_decouple ...passed 00:16:10.978 Test: blob_esnap_clone_reload ...passed 00:16:10.978 Test: blob_esnap_hotplug ...passed 00:16:10.978 Suite: blob_nocopy_extent 00:16:10.978 Test: blob_init ...[2024-04-24 01:46:09.059803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:16:10.978 passed 00:16:10.978 Test: blob_thin_provision ...passed 00:16:10.978 Test: blob_read_only ...passed 00:16:10.978 Test: bs_load ...[2024-04-24 01:46:09.106088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:16:10.978 passed 00:16:10.978 Test: bs_load_custom_cluster_size ...passed 00:16:10.978 Test: bs_load_after_failed_grow ...passed 00:16:10.978 Test: bs_cluster_sz ...[2024-04-24 01:46:09.131770] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:16:10.978 [2024-04-24 01:46:09.132078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:16:10.978 [2024-04-24 01:46:09.132304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:16:10.978 passed 00:16:10.978 Test: bs_resize_md ...passed 00:16:10.979 Test: bs_destroy ...passed 00:16:10.979 Test: bs_type ...passed 00:16:10.979 Test: bs_super_block ...passed 00:16:10.979 Test: bs_test_recover_cluster_count ...passed 00:16:10.979 Test: bs_grow_live ...passed 00:16:10.979 Test: bs_grow_live_no_space ...passed 00:16:10.979 Test: bs_test_grow ...passed 00:16:10.979 Test: blob_serialize_test ...passed 00:16:10.979 Test: super_block_crc ...passed 00:16:10.979 Test: blob_thin_prov_write_count_io ...passed 00:16:10.979 Test: blob_thin_prov_unmap_cluster ...passed 00:16:10.979 Test: bs_load_iter_test ...passed 00:16:10.979 Test: blob_relations ...[2024-04-24 01:46:09.306263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.979 [2024-04-24 01:46:09.306535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.307393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.979 [2024-04-24 01:46:09.307541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 passed 00:16:10.979 Test: blob_relations2 ...[2024-04-24 01:46:09.320870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.979 [2024-04-24 01:46:09.321167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.321239] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.979 [2024-04-24 01:46:09.321342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.322588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.979 [2024-04-24 01:46:09.322782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.323199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:10.979 [2024-04-24 01:46:09.323342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 passed 00:16:10.979 Test: blob_relations3 ...passed 00:16:10.979 Test: blobstore_clean_power_failure ...passed 00:16:10.979 Test: blob_delete_snapshot_power_failure ...[2024-04-24 01:46:09.475502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 
00:16:10.979 [2024-04-24 01:46:09.487612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:16:10.979 [2024-04-24 01:46:09.499687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:10.979 [2024-04-24 01:46:09.499950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:10.979 [2024-04-24 01:46:09.500032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.511942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:16:10.979 [2024-04-24 01:46:09.512213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:16:10.979 [2024-04-24 01:46:09.512305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:10.979 [2024-04-24 01:46:09.512416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.524351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:16:10.979 [2024-04-24 01:46:09.524666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:16:10.979 [2024-04-24 01:46:09.524739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:10.979 [2024-04-24 01:46:09.524896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.537288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:16:10.979 [2024-04-24 01:46:09.537617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.549977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:16:10.979 [2024-04-24 01:46:09.550292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 [2024-04-24 01:46:09.562445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:16:10.979 [2024-04-24 01:46:09.562753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:10.979 passed 00:16:10.979 Test: blob_create_snapshot_power_failure ...[2024-04-24 01:46:09.598741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:10.979 [2024-04-24 01:46:09.610623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:16:10.979 [2024-04-24 01:46:09.634174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:16:10.979 [2024-04-24 01:46:09.646971] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:16:10.979 passed 00:16:10.979 Test: blob_io_unit ...passed 00:16:10.979 Test: blob_io_unit_compatibility ...passed 00:16:10.979 Test: blob_ext_md_pages ...passed 00:16:10.979 Test: blob_esnap_io_4096_4096 ...passed 00:16:10.979 Test: blob_esnap_io_512_512 ...passed 00:16:10.979 Test: blob_esnap_io_4096_512 ...passed 00:16:10.979 Test: blob_esnap_io_512_4096 ...passed 00:16:10.979 Suite: blob_bs_nocopy_extent 00:16:10.979 Test: blob_open ...passed 00:16:10.979 Test: blob_create ...[2024-04-24 01:46:09.883130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:16:10.979 passed 00:16:10.979 Test: blob_create_loop ...passed 00:16:10.979 Test: blob_create_fail ...[2024-04-24 01:46:09.982988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:10.979 passed 00:16:10.979 Test: blob_create_internal ...passed 00:16:10.979 Test: blob_create_zero_extent ...passed 00:16:10.979 Test: blob_snapshot ...passed 00:16:10.979 Test: blob_clone ...passed 00:16:10.979 Test: blob_inflate ...[2024-04-24 01:46:10.167481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:16:10.979 passed 00:16:10.979 Test: blob_delete ...passed 00:16:10.979 Test: blob_resize_test ...[2024-04-24 01:46:10.234699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:16:10.979 passed 00:16:10.979 Test: channel_ops ...passed 00:16:10.979 Test: blob_super ...passed 00:16:10.979 Test: blob_rw_verify_iov ...passed 00:16:10.979 Test: blob_unmap ...passed 00:16:10.979 Test: blob_iter ...passed 00:16:10.979 Test: blob_parse_md ...passed 00:16:10.979 Test: bs_load_pending_removal ...passed 00:16:10.979 Test: bs_unload ...[2024-04-24 01:46:10.502829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:16:10.979 passed 00:16:10.979 Test: bs_usable_clusters ...passed 00:16:10.979 Test: blob_crc ...[2024-04-24 01:46:10.570759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:10.979 [2024-04-24 01:46:10.571093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:10.979 passed 00:16:10.979 Test: blob_flags ...passed 00:16:10.979 Test: bs_version ...passed 00:16:10.979 Test: blob_set_xattrs_test ...[2024-04-24 01:46:10.671846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:10.979 [2024-04-24 01:46:10.672158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:10.979 passed 00:16:10.979 Test: blob_thin_prov_alloc ...passed 00:16:10.979 Test: blob_insert_cluster_msg_test ...passed 00:16:10.979 Test: blob_thin_prov_rw ...passed 00:16:10.979 Test: blob_thin_prov_rle ...passed 00:16:10.979 Test: blob_thin_prov_rw_iov ...passed 00:16:10.979 Test: blob_snapshot_rw ...passed 00:16:10.979 Test: 
blob_snapshot_rw_iov ...passed 00:16:11.238 Test: blob_inflate_rw ...passed 00:16:11.238 Test: blob_snapshot_freeze_io ...passed 00:16:11.495 Test: blob_operation_split_rw ...passed 00:16:11.495 Test: blob_operation_split_rw_iov ...passed 00:16:11.753 Test: blob_simultaneous_operations ...[2024-04-24 01:46:11.581356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:11.753 [2024-04-24 01:46:11.581690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:11.753 [2024-04-24 01:46:11.583013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:11.753 [2024-04-24 01:46:11.583188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:11.753 [2024-04-24 01:46:11.595715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:11.753 [2024-04-24 01:46:11.595940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:11.753 [2024-04-24 01:46:11.596108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:11.753 [2024-04-24 01:46:11.596346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:11.753 passed 00:16:11.753 Test: blob_persist_test ...passed 00:16:11.753 Test: blob_decouple_snapshot ...passed 00:16:11.753 Test: blob_seek_io_unit ...passed 00:16:11.753 Test: blob_nested_freezes ...passed 00:16:11.753 Suite: blob_blob_nocopy_extent 00:16:11.753 Test: blob_write ...passed 00:16:12.010 Test: blob_read ...passed 00:16:12.010 Test: blob_rw_verify ...passed 00:16:12.010 Test: blob_rw_verify_iov_nomem ...passed 00:16:12.011 Test: blob_rw_iov_read_only ...passed 00:16:12.011 Test: blob_xattr ...passed 00:16:12.011 Test: blob_dirty_shutdown ...passed 00:16:12.011 Test: blob_is_degraded ...passed 00:16:12.011 Suite: blob_esnap_bs_nocopy_extent 00:16:12.269 Test: blob_esnap_create ...passed 00:16:12.269 Test: blob_esnap_thread_add_remove ...passed 00:16:12.269 Test: blob_esnap_clone_snapshot ...passed 00:16:12.269 Test: blob_esnap_clone_inflate ...passed 00:16:12.269 Test: blob_esnap_clone_decouple ...passed 00:16:12.269 Test: blob_esnap_clone_reload ...passed 00:16:12.269 Test: blob_esnap_hotplug ...passed 00:16:12.269 Suite: blob_copy_noextent 00:16:12.270 Test: blob_init ...[2024-04-24 01:46:12.329034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:16:12.270 passed 00:16:12.270 Test: blob_thin_provision ...passed 00:16:12.528 Test: blob_read_only ...passed 00:16:12.528 Test: bs_load ...[2024-04-24 01:46:12.376923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:16:12.528 passed 00:16:12.528 Test: bs_load_custom_cluster_size ...passed 00:16:12.528 Test: bs_load_after_failed_grow ...passed 00:16:12.528 Test: bs_cluster_sz ...[2024-04-24 01:46:12.401839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:16:12.528 [2024-04-24 01:46:12.402067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: 
*ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:16:12.528 [2024-04-24 01:46:12.402299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:16:12.528 passed 00:16:12.528 Test: bs_resize_md ...passed 00:16:12.528 Test: bs_destroy ...passed 00:16:12.528 Test: bs_type ...passed 00:16:12.528 Test: bs_super_block ...passed 00:16:12.528 Test: bs_test_recover_cluster_count ...passed 00:16:12.528 Test: bs_grow_live ...passed 00:16:12.528 Test: bs_grow_live_no_space ...passed 00:16:12.528 Test: bs_test_grow ...passed 00:16:12.528 Test: blob_serialize_test ...passed 00:16:12.528 Test: super_block_crc ...passed 00:16:12.528 Test: blob_thin_prov_write_count_io ...passed 00:16:12.528 Test: blob_thin_prov_unmap_cluster ...passed 00:16:12.528 Test: bs_load_iter_test ...passed 00:16:12.528 Test: blob_relations ...[2024-04-24 01:46:12.593305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:12.528 [2024-04-24 01:46:12.593576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.528 [2024-04-24 01:46:12.594169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:12.528 [2024-04-24 01:46:12.594304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.528 passed 00:16:12.528 Test: blob_relations2 ...[2024-04-24 01:46:12.607998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:12.528 [2024-04-24 01:46:12.608325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.528 [2024-04-24 01:46:12.608392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:12.528 [2024-04-24 01:46:12.608480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.528 [2024-04-24 01:46:12.609386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:12.528 [2024-04-24 01:46:12.609562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.528 [2024-04-24 01:46:12.609901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:12.528 [2024-04-24 01:46:12.610035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.528 passed 00:16:12.786 Test: blob_relations3 ...passed 00:16:12.786 Test: blobstore_clean_power_failure ...passed 00:16:12.786 Test: blob_delete_snapshot_power_failure ...[2024-04-24 01:46:12.763498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:16:12.786 [2024-04-24 01:46:12.778587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:12.786 [2024-04-24 01:46:12.778868] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:12.786 [2024-04-24 01:46:12.778949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.786 [2024-04-24 01:46:12.790639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:16:12.786 [2024-04-24 01:46:12.790880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:16:12.786 [2024-04-24 01:46:12.790940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:12.786 [2024-04-24 01:46:12.791043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.786 [2024-04-24 01:46:12.802701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:16:12.786 [2024-04-24 01:46:12.803009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.786 [2024-04-24 01:46:12.814676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:16:12.787 [2024-04-24 01:46:12.814983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.787 [2024-04-24 01:46:12.826710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:16:12.787 [2024-04-24 01:46:12.827006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:12.787 passed 00:16:12.787 Test: blob_create_snapshot_power_failure ...[2024-04-24 01:46:12.862510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:13.045 [2024-04-24 01:46:12.885665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:16:13.045 [2024-04-24 01:46:12.897637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:16:13.045 passed 00:16:13.045 Test: blob_io_unit ...passed 00:16:13.045 Test: blob_io_unit_compatibility ...passed 00:16:13.045 Test: blob_ext_md_pages ...passed 00:16:13.045 Test: blob_esnap_io_4096_4096 ...passed 00:16:13.045 Test: blob_esnap_io_512_512 ...passed 00:16:13.045 Test: blob_esnap_io_4096_512 ...passed 00:16:13.045 Test: blob_esnap_io_512_4096 ...passed 00:16:13.045 Suite: blob_bs_copy_noextent 00:16:13.045 Test: blob_open ...passed 00:16:13.303 Test: blob_create ...[2024-04-24 01:46:13.132909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:16:13.303 passed 00:16:13.303 Test: blob_create_loop ...passed 00:16:13.303 Test: blob_create_fail ...[2024-04-24 01:46:13.221632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:13.303 passed 00:16:13.303 Test: blob_create_internal ...passed 00:16:13.303 Test: blob_create_zero_extent ...passed 00:16:13.303 Test: 
blob_snapshot ...passed 00:16:13.303 Test: blob_clone ...passed 00:16:13.562 Test: blob_inflate ...[2024-04-24 01:46:13.394063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:16:13.562 passed 00:16:13.562 Test: blob_delete ...passed 00:16:13.562 Test: blob_resize_test ...[2024-04-24 01:46:13.458722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:16:13.562 passed 00:16:13.562 Test: channel_ops ...passed 00:16:13.562 Test: blob_super ...passed 00:16:13.562 Test: blob_rw_verify_iov ...passed 00:16:13.562 Test: blob_unmap ...passed 00:16:13.562 Test: blob_iter ...passed 00:16:13.820 Test: blob_parse_md ...passed 00:16:13.820 Test: bs_load_pending_removal ...passed 00:16:13.820 Test: bs_unload ...[2024-04-24 01:46:13.717610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:16:13.820 passed 00:16:13.820 Test: bs_usable_clusters ...passed 00:16:13.820 Test: blob_crc ...[2024-04-24 01:46:13.783604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:13.820 [2024-04-24 01:46:13.783948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:13.820 passed 00:16:13.820 Test: blob_flags ...passed 00:16:13.820 Test: bs_version ...passed 00:16:13.821 Test: blob_set_xattrs_test ...[2024-04-24 01:46:13.883877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:13.821 [2024-04-24 01:46:13.884187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:13.821 passed 00:16:14.079 Test: blob_thin_prov_alloc ...passed 00:16:14.079 Test: blob_insert_cluster_msg_test ...passed 00:16:14.079 Test: blob_thin_prov_rw ...passed 00:16:14.079 Test: blob_thin_prov_rle ...passed 00:16:14.337 Test: blob_thin_prov_rw_iov ...passed 00:16:14.337 Test: blob_snapshot_rw ...passed 00:16:14.337 Test: blob_snapshot_rw_iov ...passed 00:16:14.594 Test: blob_inflate_rw ...passed 00:16:14.594 Test: blob_snapshot_freeze_io ...passed 00:16:14.594 Test: blob_operation_split_rw ...passed 00:16:14.852 Test: blob_operation_split_rw_iov ...passed 00:16:14.852 Test: blob_simultaneous_operations ...[2024-04-24 01:46:14.794419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:14.852 [2024-04-24 01:46:14.794701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:14.852 [2024-04-24 01:46:14.795203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:14.852 [2024-04-24 01:46:14.795359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:14.852 [2024-04-24 01:46:14.797937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:14.852 [2024-04-24 01:46:14.798095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 
00:16:14.852 [2024-04-24 01:46:14.798262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:14.852 [2024-04-24 01:46:14.798420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:14.852 passed 00:16:14.852 Test: blob_persist_test ...passed 00:16:14.852 Test: blob_decouple_snapshot ...passed 00:16:14.852 Test: blob_seek_io_unit ...passed 00:16:15.111 Test: blob_nested_freezes ...passed 00:16:15.111 Suite: blob_blob_copy_noextent 00:16:15.111 Test: blob_write ...passed 00:16:15.111 Test: blob_read ...passed 00:16:15.111 Test: blob_rw_verify ...passed 00:16:15.111 Test: blob_rw_verify_iov_nomem ...passed 00:16:15.111 Test: blob_rw_iov_read_only ...passed 00:16:15.111 Test: blob_xattr ...passed 00:16:15.397 Test: blob_dirty_shutdown ...passed 00:16:15.397 Test: blob_is_degraded ...passed 00:16:15.397 Suite: blob_esnap_bs_copy_noextent 00:16:15.397 Test: blob_esnap_create ...passed 00:16:15.397 Test: blob_esnap_thread_add_remove ...passed 00:16:15.397 Test: blob_esnap_clone_snapshot ...passed 00:16:15.397 Test: blob_esnap_clone_inflate ...passed 00:16:15.397 Test: blob_esnap_clone_decouple ...passed 00:16:15.397 Test: blob_esnap_clone_reload ...passed 00:16:15.397 Test: blob_esnap_hotplug ...passed 00:16:15.397 Suite: blob_copy_extent 00:16:15.397 Test: blob_init ...[2024-04-24 01:46:15.467454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:16:15.397 passed 00:16:15.655 Test: blob_thin_provision ...passed 00:16:15.655 Test: blob_read_only ...passed 00:16:15.655 Test: bs_load ...[2024-04-24 01:46:15.518786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:16:15.655 passed 00:16:15.655 Test: bs_load_custom_cluster_size ...passed 00:16:15.655 Test: bs_load_after_failed_grow ...passed 00:16:15.655 Test: bs_cluster_sz ...[2024-04-24 01:46:15.545018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:16:15.655 [2024-04-24 01:46:15.545281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:16:15.655 [2024-04-24 01:46:15.545528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:16:15.655 passed 00:16:15.655 Test: bs_resize_md ...passed 00:16:15.655 Test: bs_destroy ...passed 00:16:15.655 Test: bs_type ...passed 00:16:15.655 Test: bs_super_block ...passed 00:16:15.655 Test: bs_test_recover_cluster_count ...passed 00:16:15.655 Test: bs_grow_live ...passed 00:16:15.655 Test: bs_grow_live_no_space ...passed 00:16:15.655 Test: bs_test_grow ...passed 00:16:15.655 Test: blob_serialize_test ...passed 00:16:15.655 Test: super_block_crc ...passed 00:16:15.655 Test: blob_thin_prov_write_count_io ...passed 00:16:15.655 Test: blob_thin_prov_unmap_cluster ...passed 00:16:15.655 Test: bs_load_iter_test ...passed 00:16:15.655 Test: blob_relations ...[2024-04-24 01:46:15.727314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:15.655 [2024-04-24 01:46:15.727642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.655 [2024-04-24 01:46:15.728412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:15.655 [2024-04-24 01:46:15.728580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.655 passed 00:16:15.914 Test: blob_relations2 ...[2024-04-24 01:46:15.743186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:15.914 [2024-04-24 01:46:15.743480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.743564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:15.914 [2024-04-24 01:46:15.743673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.744841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:15.914 [2024-04-24 01:46:15.745025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.745416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:16:15.914 [2024-04-24 01:46:15.745580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 passed 00:16:15.914 Test: blob_relations3 ...passed 00:16:15.914 Test: blobstore_clean_power_failure ...passed 00:16:15.914 Test: blob_delete_snapshot_power_failure ...[2024-04-24 01:46:15.906369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:16:15.914 [2024-04-24 01:46:15.919083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:16:15.914 [2024-04-24 01:46:15.931844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:15.914 [2024-04-24 01:46:15.932131] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:15.914 [2024-04-24 01:46:15.932206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.944696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:16:15.914 [2024-04-24 01:46:15.944954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:16:15.914 [2024-04-24 01:46:15.945028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:15.914 [2024-04-24 01:46:15.945145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.957632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:16:15.914 [2024-04-24 01:46:15.957913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:16:15.914 [2024-04-24 01:46:15.957994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:16:15.914 [2024-04-24 01:46:15.958114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.970630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:16:15.914 [2024-04-24 01:46:15.970930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.983486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:16:15.914 [2024-04-24 01:46:15.983826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:15.914 [2024-04-24 01:46:15.996481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:16:15.914 [2024-04-24 01:46:15.996775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:16.173 passed 00:16:16.173 Test: blob_create_snapshot_power_failure ...[2024-04-24 01:46:16.034629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:16:16.173 [2024-04-24 01:46:16.047105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:16:16.173 [2024-04-24 01:46:16.071890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:16:16.173 [2024-04-24 01:46:16.084610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:16:16.173 passed 00:16:16.173 Test: blob_io_unit ...passed 00:16:16.173 Test: blob_io_unit_compatibility ...passed 00:16:16.173 Test: blob_ext_md_pages ...passed 00:16:16.173 Test: blob_esnap_io_4096_4096 ...passed 00:16:16.173 Test: blob_esnap_io_512_512 ...passed 00:16:16.173 Test: blob_esnap_io_4096_512 ...passed 00:16:16.431 Test: 
blob_esnap_io_512_4096 ...passed 00:16:16.431 Suite: blob_bs_copy_extent 00:16:16.431 Test: blob_open ...passed 00:16:16.431 Test: blob_create ...[2024-04-24 01:46:16.327662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:16:16.431 passed 00:16:16.431 Test: blob_create_loop ...passed 00:16:16.431 Test: blob_create_fail ...[2024-04-24 01:46:16.423960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:16.431 passed 00:16:16.431 Test: blob_create_internal ...passed 00:16:16.431 Test: blob_create_zero_extent ...passed 00:16:16.689 Test: blob_snapshot ...passed 00:16:16.689 Test: blob_clone ...passed 00:16:16.689 Test: blob_inflate ...[2024-04-24 01:46:16.598888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:16:16.689 passed 00:16:16.689 Test: blob_delete ...passed 00:16:16.689 Test: blob_resize_test ...[2024-04-24 01:46:16.664433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:16:16.689 passed 00:16:16.689 Test: channel_ops ...passed 00:16:16.689 Test: blob_super ...passed 00:16:16.947 Test: blob_rw_verify_iov ...passed 00:16:16.947 Test: blob_unmap ...passed 00:16:16.947 Test: blob_iter ...passed 00:16:16.947 Test: blob_parse_md ...passed 00:16:16.947 Test: bs_load_pending_removal ...passed 00:16:16.947 Test: bs_unload ...[2024-04-24 01:46:16.928994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:16:16.947 passed 00:16:16.947 Test: bs_usable_clusters ...passed 00:16:16.947 Test: blob_crc ...[2024-04-24 01:46:16.995392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:16.947 [2024-04-24 01:46:16.995511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:16:16.947 passed 00:16:17.205 Test: blob_flags ...passed 00:16:17.205 Test: bs_version ...passed 00:16:17.205 Test: blob_set_xattrs_test ...[2024-04-24 01:46:17.101518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:17.205 [2024-04-24 01:46:17.101632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:16:17.205 passed 00:16:17.205 Test: blob_thin_prov_alloc ...passed 00:16:17.205 Test: blob_insert_cluster_msg_test ...passed 00:16:17.463 Test: blob_thin_prov_rw ...passed 00:16:17.463 Test: blob_thin_prov_rle ...passed 00:16:17.463 Test: blob_thin_prov_rw_iov ...passed 00:16:17.463 Test: blob_snapshot_rw ...passed 00:16:17.463 Test: blob_snapshot_rw_iov ...passed 00:16:17.720 Test: blob_inflate_rw ...passed 00:16:17.720 Test: blob_snapshot_freeze_io ...passed 00:16:17.977 Test: blob_operation_split_rw ...passed 00:16:17.977 Test: blob_operation_split_rw_iov ...passed 00:16:17.977 Test: blob_simultaneous_operations ...[2024-04-24 01:46:17.987206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:17.977 [2024-04-24 
01:46:17.987319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:17.977 [2024-04-24 01:46:17.987814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:17.977 [2024-04-24 01:46:17.987869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:17.977 [2024-04-24 01:46:17.990332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:17.977 [2024-04-24 01:46:17.990389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:17.977 [2024-04-24 01:46:17.990511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:16:17.977 [2024-04-24 01:46:17.990577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:16:17.977 passed 00:16:17.977 Test: blob_persist_test ...passed 00:16:18.236 Test: blob_decouple_snapshot ...passed 00:16:18.236 Test: blob_seek_io_unit ...passed 00:16:18.236 Test: blob_nested_freezes ...passed 00:16:18.236 Suite: blob_blob_copy_extent 00:16:18.236 Test: blob_write ...passed 00:16:18.236 Test: blob_read ...passed 00:16:18.236 Test: blob_rw_verify ...passed 00:16:18.236 Test: blob_rw_verify_iov_nomem ...passed 00:16:18.495 Test: blob_rw_iov_read_only ...passed 00:16:18.495 Test: blob_xattr ...passed 00:16:18.495 Test: blob_dirty_shutdown ...passed 00:16:18.495 Test: blob_is_degraded ...passed 00:16:18.495 Suite: blob_esnap_bs_copy_extent 00:16:18.495 Test: blob_esnap_create ...passed 00:16:18.495 Test: blob_esnap_thread_add_remove ...passed 00:16:18.495 Test: blob_esnap_clone_snapshot ...passed 00:16:18.495 Test: blob_esnap_clone_inflate ...passed 00:16:18.754 Test: blob_esnap_clone_decouple ...passed 00:16:18.754 Test: blob_esnap_clone_reload ...passed 00:16:18.754 Test: blob_esnap_hotplug ...passed 00:16:18.754 00:16:18.754 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.754 suites 16 16 n/a 0 0 00:16:18.754 tests 352 352 352 0 0 00:16:18.754 asserts 93211 93211 93211 0 n/a 00:16:18.754 00:16:18.754 Elapsed time = 12.806 seconds 00:16:18.754 01:46:18 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:16:18.754 00:16:18.754 00:16:18.754 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.754 http://cunit.sourceforge.net/ 00:16:18.754 00:16:18.754 00:16:18.754 Suite: blob_bdev 00:16:18.754 Test: create_bs_dev ...passed 00:16:18.754 Test: create_bs_dev_ro ...[2024-04-24 01:46:18.787804] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:16:18.754 passed 00:16:18.754 Test: create_bs_dev_rw ...passed 00:16:18.754 Test: claim_bs_dev ...[2024-04-24 01:46:18.788354] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:16:18.754 passed 00:16:18.754 Test: claim_bs_dev_ro ...passed 00:16:18.754 Test: deferred_destroy_refs ...passed 00:16:18.754 Test: deferred_destroy_channels ...passed 00:16:18.754 Test: deferred_destroy_threads ...passed 00:16:18.754 00:16:18.754 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.754 suites 1 1 n/a 0 0 00:16:18.754 tests 8 8 8 0 0 00:16:18.754 
asserts 119 119 119 0 n/a 00:16:18.754 00:16:18.754 Elapsed time = 0.001 seconds 00:16:18.754 01:46:18 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:16:18.754 00:16:18.754 00:16:18.754 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.754 http://cunit.sourceforge.net/ 00:16:18.754 00:16:18.754 00:16:18.754 Suite: tree 00:16:18.754 Test: blobfs_tree_op_test ...passed 00:16:18.754 00:16:18.754 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.754 suites 1 1 n/a 0 0 00:16:18.754 tests 1 1 1 0 0 00:16:18.754 asserts 27 27 27 0 n/a 00:16:18.754 00:16:18.754 Elapsed time = 0.000 seconds 00:16:19.040 01:46:18 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:16:19.040 00:16:19.040 00:16:19.040 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.040 http://cunit.sourceforge.net/ 00:16:19.040 00:16:19.040 00:16:19.040 Suite: blobfs_async_ut 00:16:19.040 Test: fs_init ...passed 00:16:19.040 Test: fs_open ...passed 00:16:19.040 Test: fs_create ...passed 00:16:19.040 Test: fs_truncate ...passed 00:16:19.040 Test: fs_rename ...[2024-04-24 01:46:19.018704] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:16:19.040 passed 00:16:19.040 Test: fs_rw_async ...passed 00:16:19.040 Test: fs_writev_readv_async ...passed 00:16:19.040 Test: tree_find_buffer_ut ...passed 00:16:19.040 Test: channel_ops ...passed 00:16:19.040 Test: channel_ops_sync ...passed 00:16:19.040 00:16:19.040 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.040 suites 1 1 n/a 0 0 00:16:19.040 tests 10 10 10 0 0 00:16:19.040 asserts 292 292 292 0 n/a 00:16:19.040 00:16:19.040 Elapsed time = 0.198 seconds 00:16:19.330 01:46:19 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:16:19.330 00:16:19.330 00:16:19.331 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.331 http://cunit.sourceforge.net/ 00:16:19.331 00:16:19.331 00:16:19.331 Suite: blobfs_sync_ut 00:16:19.331 Test: cache_read_after_write ...[2024-04-24 01:46:19.220886] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:16:19.331 passed 00:16:19.331 Test: file_length ...passed 00:16:19.331 Test: append_write_to_extend_blob ...passed 00:16:19.331 Test: partial_buffer ...passed 00:16:19.331 Test: cache_write_null_buffer ...passed 00:16:19.331 Test: fs_create_sync ...passed 00:16:19.331 Test: fs_rename_sync ...passed 00:16:19.331 Test: cache_append_no_cache ...passed 00:16:19.331 Test: fs_delete_file_without_close ...passed 00:16:19.331 00:16:19.331 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.331 suites 1 1 n/a 0 0 00:16:19.331 tests 9 9 9 0 0 00:16:19.331 asserts 345 345 345 0 n/a 00:16:19.331 00:16:19.331 Elapsed time = 0.379 seconds 00:16:19.331 01:46:19 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:16:19.591 00:16:19.591 00:16:19.591 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.591 http://cunit.sourceforge.net/ 00:16:19.591 00:16:19.591 00:16:19.591 Suite: blobfs_bdev_ut 00:16:19.591 Test: spdk_blobfs_bdev_detect_test ...[2024-04-24 01:46:19.418999] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:16:19.591 passed 00:16:19.591 Test: spdk_blobfs_bdev_create_test ...[2024-04-24 01:46:19.420007] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:16:19.591 passed 00:16:19.591 Test: spdk_blobfs_bdev_mount_test ...passed 00:16:19.591 00:16:19.591 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.591 suites 1 1 n/a 0 0 00:16:19.591 tests 3 3 3 0 0 00:16:19.591 asserts 9 9 9 0 n/a 00:16:19.591 00:16:19.591 Elapsed time = 0.001 seconds 00:16:19.591 00:16:19.591 real 0m13.686s 00:16:19.591 user 0m13.093s 00:16:19.591 sys 0m0.731s 00:16:19.591 01:46:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.591 01:46:19 -- common/autotest_common.sh@10 -- # set +x 00:16:19.591 ************************************ 00:16:19.591 END TEST unittest_blob_blobfs 00:16:19.591 ************************************ 00:16:19.591 01:46:19 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:16:19.591 01:46:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:19.591 01:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.591 01:46:19 -- common/autotest_common.sh@10 -- # set +x 00:16:19.591 ************************************ 00:16:19.591 START TEST unittest_event 00:16:19.591 ************************************ 00:16:19.591 01:46:19 -- common/autotest_common.sh@1111 -- # unittest_event 00:16:19.591 01:46:19 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:16:19.591 00:16:19.591 00:16:19.591 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.591 http://cunit.sourceforge.net/ 00:16:19.591 00:16:19.591 00:16:19.591 Suite: app_suite 00:16:19.591 Test: test_spdk_app_parse_args ...app_ut [options] 00:16:19.591 00:16:19.591 CPU options: 00:16:19.591 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:16:19.591 (like [0,1,10]) 00:16:19.591 --lcores lcore to CPU mapping list. The list is in the format: 00:16:19.592 [<,lcores[@CPUs]>...] 00:16:19.592 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:16:19.592 Within the group, '-' is used for range separator, 00:16:19.592 ',' is used for single number separator. 00:16:19.592 '( )' can be omitted for single element group, 00:16:19.592 '@' can be omitted if cpus and lcores have the same value 00:16:19.592 --disable-cpumask-locks Disable CPU core lock files. 00:16:19.592 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:16:19.592 pollers in the app support interrupt mode) 00:16:19.592 -p, --main-core main (primary) core for DPDK 00:16:19.592 00:16:19.592 Configuration options: 00:16:19.592 -c, --config, --json JSON config file 00:16:19.592 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:16:19.592 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:16:19.592 --wait-for-rpc wait for RPCs to initialize subsystems 00:16:19.592 --rpcs-allowed comma-separated list of permitted RPCS 00:16:19.592 --json-ignore-init-errors don't exit on invalid config entry 00:16:19.592 00:16:19.592 Memory options: 00:16:19.592 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:16:19.592 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:16:19.592 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:16:19.592 -R, --huge-unlink unlink huge files after initialization 00:16:19.592 -n, --mem-channels number of memory channels used for DPDK 00:16:19.592 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:16:19.592 --msg-mempool-size global message memory pool size in count (default: 262143) 00:16:19.592 --no-huge run without using hugepages 00:16:19.592 -i, --shm-id shared memory ID (optional) 00:16:19.592 -g, --single-file-segments force creating just one hugetlbfs file 00:16:19.592 00:16:19.592 PCI options: 00:16:19.592 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:16:19.592 -B, --pci-blocked pci addr to block (can be used more than once) 00:16:19.592 -u, --no-pci disable PCI access 00:16:19.592 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:16:19.592 00:16:19.592 Log options: 00:16:19.592 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:16:19.592 --silence-noticelog disable notice level logging to stderr 00:16:19.592 app_ut: invalid option -- 'z' 00:16:19.592 00:16:19.592 Trace options: 00:16:19.592 --num-trace-entries number of trace entries for each core, must be power of 2, 00:16:19.592 setting 0 to disable trace (default 32768) 00:16:19.592 Tracepoints vary in size and can use more than one trace entry. 00:16:19.592 -e, --tpoint-group [:] 00:16:19.592 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:16:19.592 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:16:19.592 a tracepoint group. First tpoint inside a group can be enabled by 00:16:19.592 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:16:19.592 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:16:19.592 in /include/spdk_internal/trace_defs.h 00:16:19.592 00:16:19.592 Other options: 00:16:19.592 -h, --help show this usage 00:16:19.592 -v, --version print SPDK version 00:16:19.592 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:16:19.592 --env-context Opaque context for use of the env implementation 00:16:19.592 app_ut [options] 00:16:19.592 00:16:19.592 CPU options: 00:16:19.592 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:16:19.592 (like [0,1,10]) 00:16:19.592 --lcores lcore to CPU mapping list. The list is in the format: 00:16:19.592 [<,lcores[@CPUs]>...] 00:16:19.592 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:16:19.592 Within the group, '-' is used for range separator, 00:16:19.592 ',' is used for single number separator. 00:16:19.592 '( )' can be omitted for single element group, 00:16:19.592 '@' can be omitted if cpus and lcores have the same value 00:16:19.592 --disable-cpumask-locks Disable CPU core lock files. 
00:16:19.592 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:16:19.592 pollers in the app support interrupt mode) 00:16:19.592 -p, --main-core main (primary) core for DPDK 00:16:19.592 00:16:19.592 Configuration options: 00:16:19.592 -c, --config, --json JSON config file 00:16:19.592 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:16:19.592 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:16:19.592 --wait-for-rpc wait for RPCs to initialize subsystems 00:16:19.592 --rpcs-allowed comma-separated list of permitted RPCS 00:16:19.592 --json-ignore-init-errors don't exit on invalid config entry 00:16:19.592 00:16:19.592 Memory options: 00:16:19.592 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:16:19.592 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:16:19.592 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:16:19.592 -R, --huge-unlink unlink huge files after initialization 00:16:19.592 -n, --mem-channels number of memory channels used for DPDK 00:16:19.592 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:16:19.592 --msg-mempool-size global message memory pool size in count (default: 262143) 00:16:19.592 --no-huge run without using hugepages 00:16:19.592 -i, --shm-id shared memory ID (optional) 00:16:19.592 -g, --single-file-segments force creating just one hugetlbfs file 00:16:19.592 00:16:19.592 PCI options: 00:16:19.592 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:16:19.592 -B, --pci-blocked pci addr to block (can be used more than once) 00:16:19.592 -u, --no-pci disable PCI access 00:16:19.592 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:16:19.592 00:16:19.592 Log options: 00:16:19.592 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:16:19.592 app_ut: unrecognized option '--test-long-opt' 00:16:19.592 --silence-noticelog disable notice level logging to stderr 00:16:19.592 00:16:19.592 Trace options: 00:16:19.592 --num-trace-entries number of trace entries for each core, must be power of 2, 00:16:19.592 setting 0 to disable trace (default 32768) 00:16:19.592 Tracepoints vary in size and can use more than one trace entry. 00:16:19.592 -e, --tpoint-group [:] 00:16:19.592 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:16:19.592 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:16:19.592 a tracepoint group. First tpoint inside a group can be enabled by 00:16:19.592 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:16:19.592 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:16:19.592 in /include/spdk_internal/trace_defs.h 00:16:19.592 00:16:19.592 Other options: 00:16:19.592 -h, --help show this usage 00:16:19.592 -v, --version print SPDK version 00:16:19.592 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:16:19.592 --env-context Opaque context for use of the env implementation 00:16:19.592 [2024-04-24 01:46:19.558306] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1105:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:16:19.592 [2024-04-24 01:46:19.558743] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1286:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:16:19.592 app_ut [options] 00:16:19.592 00:16:19.592 CPU options: 00:16:19.592 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:16:19.592 (like [0,1,10]) 00:16:19.592 --lcores lcore to CPU mapping list. The list is in the format: 00:16:19.592 [<,lcores[@CPUs]>...] 00:16:19.592 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:16:19.592 Within the group, '-' is used for range separator, 00:16:19.592 ',' is used for single number separator. 00:16:19.592 '( )' can be omitted for single element group, 00:16:19.593 '@' can be omitted if cpus and lcores have the same value 00:16:19.593 --disable-cpumask-locks Disable CPU core lock files. 00:16:19.593 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:16:19.593 pollers in the app support interrupt mode) 00:16:19.593 -p, --main-core main (primary) core for DPDK 00:16:19.593 00:16:19.593 Configuration options: 00:16:19.593 -c, --config, --json JSON config file 00:16:19.593 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:16:19.593 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:16:19.593 --wait-for-rpc wait for RPCs to initialize subsystems 00:16:19.593 --rpcs-allowed comma-separated list of permitted RPCS 00:16:19.593 --json-ignore-init-errors don't exit on invalid config entry 00:16:19.593 00:16:19.593 Memory options: 00:16:19.593 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:16:19.593 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:16:19.593 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:16:19.593 -R, --huge-unlink unlink huge files after initialization 00:16:19.593 -n, --mem-channels number of memory channels used for DPDK 00:16:19.593 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:16:19.593 --msg-mempool-size global message memory pool size in count (default: 262143) 00:16:19.593 --no-huge run without using hugepages 00:16:19.593 -i, --shm-id shared memory ID (optional) 00:16:19.593 -g, --single-file-segments force creating just one hugetlbfs file 00:16:19.593 00:16:19.593 PCI options: 00:16:19.593 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:16:19.593 -B, --pci-blocked pci addr to block (can be used more than once) 00:16:19.593 -u, --no-pci disable PCI access 00:16:19.593 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:16:19.593 00:16:19.593 Log options: 00:16:19.593 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:16:19.593 --silence-noticelog disable notice level logging to stderr 00:16:19.593 00:16:19.593 Trace options: 00:16:19.593 --num-trace-entries number of trace entries for each core, must be power of 2, 00:16:19.593 setting 0 to disable trace (default 32768) 00:16:19.593 Tracepoints vary in size and can use more than one trace entry. 00:16:19.593 -e, --tpoint-group [:] 00:16:19.593 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:16:19.593 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:16:19.593 a tracepoint group. First tpoint inside a group can be enabled by 00:16:19.593 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:16:19.593 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:16:19.593 in /include/spdk_internal/trace_defs.h 00:16:19.593 00:16:19.593 Other options: 00:16:19.593 -h, --help show this usage 00:16:19.593 -v, --version print SPDK version 00:16:19.593 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:16:19.593 --env-context Opaque context for use of the env implementation 00:16:19.593 passed 00:16:19.593 00:16:19.593 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.593 suites 1 1 n/a 0 0 00:16:19.593 tests 1 1 1 0 0 00:16:19.593 asserts 8 8 8 0 n/a 00:16:19.593 00:16:19.593 Elapsed time = 0.002 seconds 00:16:19.593 [2024-04-24 01:46:19.559108] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:16:19.593 01:46:19 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:16:19.593 00:16:19.593 00:16:19.593 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.593 http://cunit.sourceforge.net/ 00:16:19.593 00:16:19.593 00:16:19.593 Suite: app_suite 00:16:19.593 Test: test_create_reactor ...passed 00:16:19.593 Test: test_init_reactors ...passed 00:16:19.593 Test: test_event_call ...passed 00:16:19.593 Test: test_schedule_thread ...passed 00:16:19.593 Test: test_reschedule_thread ...passed 00:16:19.593 Test: test_bind_thread ...passed 00:16:19.593 Test: test_for_each_reactor ...passed 00:16:19.593 Test: test_reactor_stats ...passed 00:16:19.593 Test: test_scheduler ...passed 00:16:19.593 Test: test_governor ...passed 00:16:19.593 00:16:19.593 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.593 suites 1 1 n/a 0 0 00:16:19.593 tests 10 10 10 0 0 00:16:19.593 asserts 344 344 344 0 n/a 00:16:19.593 00:16:19.593 Elapsed time = 0.016 seconds 00:16:19.593 00:16:19.593 real 0m0.091s 00:16:19.593 user 0m0.052s 00:16:19.593 sys 0m0.040s 00:16:19.593 01:46:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.593 01:46:19 -- common/autotest_common.sh@10 -- # set +x 00:16:19.593 ************************************ 00:16:19.593 END TEST unittest_event 00:16:19.593 ************************************ 00:16:19.852 01:46:19 -- unit/unittest.sh@233 -- # uname -s 00:16:19.852 01:46:19 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:16:19.852 01:46:19 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:16:19.852 01:46:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:19.852 01:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.852 01:46:19 -- common/autotest_common.sh@10 -- # set +x 00:16:19.852 ************************************ 00:16:19.852 START TEST unittest_ftl 00:16:19.852 ************************************ 00:16:19.852 01:46:19 -- common/autotest_common.sh@1111 -- # unittest_ftl 00:16:19.852 01:46:19 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:16:19.852 00:16:19.852 00:16:19.852 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.852 http://cunit.sourceforge.net/ 00:16:19.852 00:16:19.852 00:16:19.852 Suite: ftl_band_suite 00:16:19.852 Test: test_band_block_offset_from_addr_base ...passed 00:16:19.852 Test: test_band_block_offset_from_addr_offset ...passed 00:16:19.852 Test: test_band_addr_from_block_offset ...passed 00:16:19.852 Test: test_band_set_addr ...passed 00:16:19.852 Test: test_invalidate_addr ...passed 00:16:19.852 Test: test_next_xfer_addr 
...passed 00:16:19.852 00:16:19.852 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.852 suites 1 1 n/a 0 0 00:16:19.852 tests 6 6 6 0 0 00:16:19.852 asserts 30356 30356 30356 0 n/a 00:16:19.852 00:16:19.852 Elapsed time = 0.169 seconds 00:16:20.116 01:46:20 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:16:20.116 00:16:20.116 00:16:20.116 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.116 http://cunit.sourceforge.net/ 00:16:20.116 00:16:20.116 00:16:20.116 Suite: ftl_bitmap 00:16:20.116 Test: test_ftl_bitmap_create ...[2024-04-24 01:46:20.028025] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:16:20.116 [2024-04-24 01:46:20.028982] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:16:20.116 passed 00:16:20.116 Test: test_ftl_bitmap_get ...passed 00:16:20.116 Test: test_ftl_bitmap_set ...passed 00:16:20.116 Test: test_ftl_bitmap_clear ...passed 00:16:20.116 Test: test_ftl_bitmap_find_first_set ...passed 00:16:20.116 Test: test_ftl_bitmap_find_first_clear ...passed 00:16:20.116 Test: test_ftl_bitmap_count_set ...passed 00:16:20.116 00:16:20.116 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.116 suites 1 1 n/a 0 0 00:16:20.116 tests 7 7 7 0 0 00:16:20.116 asserts 137 137 137 0 n/a 00:16:20.116 00:16:20.116 Elapsed time = 0.001 seconds 00:16:20.116 01:46:20 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:16:20.116 00:16:20.116 00:16:20.116 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.116 http://cunit.sourceforge.net/ 00:16:20.116 00:16:20.116 00:16:20.116 Suite: ftl_io_suite 00:16:20.116 Test: test_completion ...passed 00:16:20.116 Test: test_multiple_ios ...passed 00:16:20.116 00:16:20.116 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.116 suites 1 1 n/a 0 0 00:16:20.116 tests 2 2 2 0 0 00:16:20.116 asserts 47 47 47 0 n/a 00:16:20.116 00:16:20.116 Elapsed time = 0.003 seconds 00:16:20.116 01:46:20 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:16:20.116 00:16:20.116 00:16:20.116 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.116 http://cunit.sourceforge.net/ 00:16:20.116 00:16:20.116 00:16:20.116 Suite: ftl_mngt 00:16:20.116 Test: test_next_step ...passed 00:16:20.116 Test: test_continue_step ...passed 00:16:20.116 Test: test_get_func_and_step_cntx_alloc ...passed 00:16:20.116 Test: test_fail_step ...passed 00:16:20.116 Test: test_mngt_call_and_call_rollback ...passed 00:16:20.116 Test: test_nested_process_failure ...passed 00:16:20.116 00:16:20.116 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.116 suites 1 1 n/a 0 0 00:16:20.116 tests 6 6 6 0 0 00:16:20.116 asserts 176 176 176 0 n/a 00:16:20.116 00:16:20.116 Elapsed time = 0.002 seconds 00:16:20.116 01:46:20 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:16:20.116 00:16:20.116 00:16:20.116 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.116 http://cunit.sourceforge.net/ 00:16:20.116 00:16:20.116 00:16:20.116 Suite: ftl_mempool 00:16:20.116 Test: test_ftl_mempool_create ...passed 00:16:20.116 Test: test_ftl_mempool_get_put ...passed 00:16:20.116 00:16:20.116 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.116 suites 
1 1 n/a 0 0 00:16:20.116 tests 2 2 2 0 0 00:16:20.116 asserts 36 36 36 0 n/a 00:16:20.116 00:16:20.116 Elapsed time = 0.000 seconds 00:16:20.116 01:46:20 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:16:20.116 00:16:20.116 00:16:20.116 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.116 http://cunit.sourceforge.net/ 00:16:20.116 00:16:20.116 00:16:20.116 Suite: ftl_addr64_suite 00:16:20.116 Test: test_addr_cached ...passed 00:16:20.116 00:16:20.116 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.116 suites 1 1 n/a 0 0 00:16:20.116 tests 1 1 1 0 0 00:16:20.116 asserts 1536 1536 1536 0 n/a 00:16:20.116 00:16:20.116 Elapsed time = 0.000 seconds 00:16:20.116 01:46:20 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:16:20.374 00:16:20.374 00:16:20.374 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.374 http://cunit.sourceforge.net/ 00:16:20.374 00:16:20.374 00:16:20.374 Suite: ftl_sb 00:16:20.374 Test: test_sb_crc_v2 ...passed 00:16:20.374 Test: test_sb_crc_v3 ...passed 00:16:20.374 Test: test_sb_v3_md_layout ...[2024-04-24 01:46:20.216362] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:16:20.374 [2024-04-24 01:46:20.217068] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:16:20.374 [2024-04-24 01:46:20.217217] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:16:20.374 [2024-04-24 01:46:20.217361] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:16:20.374 [2024-04-24 01:46:20.217486] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:16:20.374 [2024-04-24 01:46:20.217688] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:16:20.374 [2024-04-24 01:46:20.217818] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:16:20.374 [2024-04-24 01:46:20.217946] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:16:20.374 [2024-04-24 01:46:20.218088] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:16:20.374 [2024-04-24 01:46:20.218211] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:16:20.374 [2024-04-24 01:46:20.218360] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:16:20.374 passed 00:16:20.374 Test: test_sb_v5_md_layout ...passed 00:16:20.374 00:16:20.374 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.374 suites 1 1 n/a 0 0 00:16:20.374 tests 4 4 4 0 0 00:16:20.374 asserts 148 148 148 0 n/a 00:16:20.374 00:16:20.374 Elapsed time = 0.002 seconds 00:16:20.374 01:46:20 -- 
unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:16:20.374 00:16:20.374 00:16:20.374 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.374 http://cunit.sourceforge.net/ 00:16:20.374 00:16:20.374 00:16:20.374 Suite: ftl_layout_upgrade 00:16:20.374 Test: test_l2p_upgrade ...passed 00:16:20.374 00:16:20.374 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.374 suites 1 1 n/a 0 0 00:16:20.374 tests 1 1 1 0 0 00:16:20.374 asserts 140 140 140 0 n/a 00:16:20.374 00:16:20.374 Elapsed time = 0.001 seconds 00:16:20.374 00:16:20.374 real 0m0.553s 00:16:20.374 user 0m0.254s 00:16:20.374 sys 0m0.301s 00:16:20.374 01:46:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:20.374 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.374 ************************************ 00:16:20.374 END TEST unittest_ftl 00:16:20.374 ************************************ 00:16:20.374 01:46:20 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:16:20.374 01:46:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:20.374 01:46:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.374 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.374 ************************************ 00:16:20.374 START TEST unittest_accel 00:16:20.374 ************************************ 00:16:20.374 01:46:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:16:20.374 00:16:20.374 00:16:20.374 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.374 http://cunit.sourceforge.net/ 00:16:20.374 00:16:20.375 00:16:20.375 Suite: accel_sequence 00:16:20.375 Test: test_sequence_fill_copy ...passed 00:16:20.375 Test: test_sequence_abort ...passed 00:16:20.375 Test: test_sequence_append_error ...passed 00:16:20.375 Test: test_sequence_completion_error ...[2024-04-24 01:46:20.409128] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7ff8794fe7c0 00:16:20.375 [2024-04-24 01:46:20.409590] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7ff8794fe7c0 00:16:20.375 [2024-04-24 01:46:20.409658] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7ff8794fe7c0 00:16:20.375 [2024-04-24 01:46:20.410092] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7ff8794fe7c0 00:16:20.375 passed 00:16:20.375 Test: test_sequence_decompress ...passed 00:16:20.375 Test: test_sequence_reverse ...passed 00:16:20.375 Test: test_sequence_copy_elision ...passed 00:16:20.375 Test: test_sequence_accel_buffers ...passed 00:16:20.375 Test: test_sequence_memory_domain ...[2024-04-24 01:46:20.424372] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1736:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:16:20.375 [2024-04-24 01:46:20.424585] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1775:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:16:20.375 passed 00:16:20.375 Test: test_sequence_module_memory_domain ...passed 00:16:20.375 Test: test_sequence_crypto ...passed 00:16:20.375 Test: test_sequence_driver 
...[2024-04-24 01:46:20.432482] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1883:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7ff8788b67c0 using driver: ut 00:16:20.375 [2024-04-24 01:46:20.432613] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1947:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7ff8788b67c0 through driver: ut 00:16:20.375 passed 00:16:20.375 Test: test_sequence_same_iovs ...passed 00:16:20.375 Test: test_sequence_crc32 ...passed 00:16:20.375 Suite: accel 00:16:20.375 Test: test_spdk_accel_task_complete ...passed 00:16:20.375 Test: test_get_task ...passed 00:16:20.375 Test: test_spdk_accel_submit_copy ...passed 00:16:20.375 Test: test_spdk_accel_submit_dualcast ...[2024-04-24 01:46:20.441823] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:16:20.375 [2024-04-24 01:46:20.442340] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:16:20.375 passed 00:16:20.375 Test: test_spdk_accel_submit_compare ...passed 00:16:20.375 Test: test_spdk_accel_submit_fill ...passed 00:16:20.375 Test: test_spdk_accel_submit_crc32c ...passed 00:16:20.375 Test: test_spdk_accel_submit_crc32cv ...passed 00:16:20.375 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:16:20.375 Test: test_spdk_accel_submit_xor ...passed 00:16:20.375 Test: test_spdk_accel_module_find_by_name ...passed 00:16:20.375 Test: test_spdk_accel_module_register ...passed 00:16:20.375 00:16:20.375 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.375 suites 2 2 n/a 0 0 00:16:20.375 tests 26 26 26 0 0 00:16:20.375 asserts 831 831 831 0 n/a 00:16:20.375 00:16:20.375 Elapsed time = 0.048 seconds 00:16:20.633 00:16:20.633 real 0m0.098s 00:16:20.633 user 0m0.065s 00:16:20.633 sys 0m0.033s 00:16:20.633 01:46:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:20.633 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.633 ************************************ 00:16:20.633 END TEST unittest_accel 00:16:20.633 ************************************ 00:16:20.633 01:46:20 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:16:20.633 01:46:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:20.633 01:46:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.633 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.633 ************************************ 00:16:20.633 START TEST unittest_ioat 00:16:20.633 ************************************ 00:16:20.633 01:46:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:16:20.633 00:16:20.633 00:16:20.633 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.633 http://cunit.sourceforge.net/ 00:16:20.633 00:16:20.633 00:16:20.633 Suite: ioat 00:16:20.633 Test: ioat_state_check ...passed 00:16:20.633 00:16:20.633 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.633 suites 1 1 n/a 0 0 00:16:20.633 tests 1 1 1 0 0 00:16:20.633 asserts 32 32 32 0 n/a 00:16:20.633 00:16:20.633 Elapsed time = 0.000 seconds 00:16:20.633 00:16:20.633 real 0m0.039s 00:16:20.633 user 0m0.004s 00:16:20.633 sys 0m0.035s 00:16:20.633 01:46:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:20.633 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.633 
************************************ 00:16:20.633 END TEST unittest_ioat 00:16:20.633 ************************************ 00:16:20.633 01:46:20 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:20.633 01:46:20 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:16:20.633 01:46:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:20.633 01:46:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.633 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.893 ************************************ 00:16:20.893 START TEST unittest_idxd_user 00:16:20.893 ************************************ 00:16:20.893 01:46:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:16:20.893 00:16:20.893 00:16:20.893 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.893 http://cunit.sourceforge.net/ 00:16:20.893 00:16:20.893 00:16:20.893 Suite: idxd_user 00:16:20.893 Test: test_idxd_wait_cmd ...[2024-04-24 01:46:20.743911] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:16:20.893 [2024-04-24 01:46:20.744214] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:16:20.893 passed 00:16:20.893 Test: test_idxd_reset_dev ...[2024-04-24 01:46:20.744358] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:16:20.893 passed 00:16:20.893 Test: test_idxd_group_config ...passed 00:16:20.893 Test: test_idxd_wq_config ...[2024-04-24 01:46:20.744401] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:16:20.893 passed 00:16:20.893 00:16:20.893 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.893 suites 1 1 n/a 0 0 00:16:20.893 tests 4 4 4 0 0 00:16:20.893 asserts 20 20 20 0 n/a 00:16:20.893 00:16:20.893 Elapsed time = 0.001 seconds 00:16:20.893 00:16:20.893 real 0m0.034s 00:16:20.893 user 0m0.016s 00:16:20.893 sys 0m0.018s 00:16:20.893 01:46:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:20.893 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.893 ************************************ 00:16:20.893 END TEST unittest_idxd_user 00:16:20.893 ************************************ 00:16:20.893 01:46:20 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:16:20.893 01:46:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:20.893 01:46:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.893 01:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.893 ************************************ 00:16:20.893 START TEST unittest_iscsi 00:16:20.893 ************************************ 00:16:20.893 01:46:20 -- common/autotest_common.sh@1111 -- # unittest_iscsi 00:16:20.893 01:46:20 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:16:20.893 00:16:20.893 00:16:20.893 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.893 http://cunit.sourceforge.net/ 00:16:20.893 00:16:20.893 00:16:20.893 Suite: conn_suite 00:16:20.893 Test: read_task_split_in_order_case ...passed 00:16:20.893 Test: read_task_split_reverse_order_case ...passed 00:16:20.893 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 
00:16:20.893 Test: process_non_read_task_completion_test ...passed 00:16:20.893 Test: free_tasks_on_connection ...passed 00:16:20.893 Test: free_tasks_with_queued_datain ...passed 00:16:20.893 Test: abort_queued_datain_task_test ...passed 00:16:20.893 Test: abort_queued_datain_tasks_test ...passed 00:16:20.893 00:16:20.893 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.893 suites 1 1 n/a 0 0 00:16:20.893 tests 8 8 8 0 0 00:16:20.893 asserts 230 230 230 0 n/a 00:16:20.893 00:16:20.893 Elapsed time = 0.000 seconds 00:16:20.893 01:46:20 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:16:20.893 00:16:20.893 00:16:20.893 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.893 http://cunit.sourceforge.net/ 00:16:20.893 00:16:20.893 00:16:20.893 Suite: iscsi_suite 00:16:20.893 Test: param_negotiation_test ...passed 00:16:20.893 Test: list_negotiation_test ...passed 00:16:20.893 Test: parse_valid_test ...passed 00:16:20.893 Test: parse_invalid_test ...[2024-04-24 01:46:20.936493] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:16:20.893 [2024-04-24 01:46:20.936900] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:16:20.893 [2024-04-24 01:46:20.936975] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:16:20.893 [2024-04-24 01:46:20.937084] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:16:20.893 [2024-04-24 01:46:20.937303] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:16:20.893 [2024-04-24 01:46:20.937393] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:16:20.893 [2024-04-24 01:46:20.937596] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:16:20.893 passed 00:16:20.893 00:16:20.893 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.893 suites 1 1 n/a 0 0 00:16:20.893 tests 4 4 4 0 0 00:16:20.893 asserts 161 161 161 0 n/a 00:16:20.893 00:16:20.893 Elapsed time = 0.006 seconds 00:16:20.893 01:46:20 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:16:20.893 00:16:20.893 00:16:20.893 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.893 http://cunit.sourceforge.net/ 00:16:20.893 00:16:20.893 00:16:20.893 Suite: iscsi_target_node_suite 00:16:20.893 Test: add_lun_test_cases ...[2024-04-24 01:46:20.976654] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:16:20.893 [2024-04-24 01:46:20.977007] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:16:20.893 [2024-04-24 01:46:20.977106] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:16:20.893 [2024-04-24 01:46:20.977161] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:16:20.893 [2024-04-24 01:46:20.977207] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:16:20.893 passed 00:16:20.894 Test: allow_any_allowed ...passed 00:16:20.894 Test: allow_ipv6_allowed ...passed 00:16:20.894 Test: 
allow_ipv6_denied ...passed 00:16:20.894 Test: allow_ipv6_invalid ...passed 00:16:20.894 Test: allow_ipv4_allowed ...passed 00:16:20.894 Test: allow_ipv4_denied ...passed 00:16:20.894 Test: allow_ipv4_invalid ...passed 00:16:20.894 Test: node_access_allowed ...passed 00:16:20.894 Test: node_access_denied_by_empty_netmask ...passed 00:16:20.894 Test: node_access_multi_initiator_groups_cases ...passed 00:16:20.894 Test: allow_iscsi_name_multi_maps_case ...passed 00:16:20.894 Test: chap_param_test_cases ...[2024-04-24 01:46:20.977659] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:16:20.894 passed 00:16:20.894 00:16:20.894 [2024-04-24 01:46:20.977706] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:16:20.894 [2024-04-24 01:46:20.977772] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:16:20.894 [2024-04-24 01:46:20.977812] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:16:20.894 [2024-04-24 01:46:20.977860] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:16:20.894 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.894 suites 1 1 n/a 0 0 00:16:20.894 tests 13 13 13 0 0 00:16:20.894 asserts 50 50 50 0 n/a 00:16:20.894 00:16:21.153 Elapsed time = 0.001 seconds 00:16:21.153 01:46:20 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:16:21.153 00:16:21.153 00:16:21.153 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.153 http://cunit.sourceforge.net/ 00:16:21.153 00:16:21.153 00:16:21.153 Suite: iscsi_suite 00:16:21.153 Test: op_login_check_target_test ...passed 00:16:21.153 Test: op_login_session_normal_test ...[2024-04-24 01:46:21.025084] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:16:21.153 [2024-04-24 01:46:21.025467] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:16:21.153 [2024-04-24 01:46:21.025524] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:16:21.153 [2024-04-24 01:46:21.025574] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:16:21.153 [2024-04-24 01:46:21.025650] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:16:21.153 [2024-04-24 01:46:21.025773] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:16:21.153 [2024-04-24 01:46:21.025888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:16:21.153 passed 00:16:21.153 Test: maxburstlength_test ...[2024-04-24 01:46:21.025960] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:16:21.153 [2024-04-24 01:46:21.026237] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the 
dataout pdu data length is larger than the value sent by R2T PDU 00:16:21.153 [2024-04-24 01:46:21.026295] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:16:21.153 passed 00:16:21.153 Test: underflow_for_read_transfer_test ...passed 00:16:21.153 Test: underflow_for_zero_read_transfer_test ...passed 00:16:21.153 Test: underflow_for_request_sense_test ...passed 00:16:21.153 Test: underflow_for_check_condition_test ...passed 00:16:21.153 Test: add_transfer_task_test ...passed 00:16:21.153 Test: get_transfer_task_test ...passed 00:16:21.153 Test: del_transfer_task_test ...passed 00:16:21.153 Test: clear_all_transfer_tasks_test ...passed 00:16:21.153 Test: build_iovs_test ...passed 00:16:21.153 Test: build_iovs_with_md_test ...passed 00:16:21.153 Test: pdu_hdr_op_login_test ...[2024-04-24 01:46:21.027899] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:16:21.153 [2024-04-24 01:46:21.028017] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:16:21.153 [2024-04-24 01:46:21.028264] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:16:21.153 passed 00:16:21.153 Test: pdu_hdr_op_text_test ...[2024-04-24 01:46:21.028378] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:16:21.153 [2024-04-24 01:46:21.028480] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:16:21.153 [2024-04-24 01:46:21.028537] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:16:21.153 passed 00:16:21.153 Test: pdu_hdr_op_logout_test ...[2024-04-24 01:46:21.028642] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:16:21.153 passed 00:16:21.153 Test: pdu_hdr_op_scsi_test ...[2024-04-24 01:46:21.028831] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:16:21.153 [2024-04-24 01:46:21.028875] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:16:21.153 [2024-04-24 01:46:21.028939] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:16:21.153 [2024-04-24 01:46:21.029038] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:16:21.153 [2024-04-24 01:46:21.029144] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:16:21.153 [2024-04-24 01:46:21.029342] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:16:21.153 passed 00:16:21.153 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-24 01:46:21.029469] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:16:21.153 [2024-04-24 01:46:21.029566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:16:21.153 passed 00:16:21.153 Test: pdu_hdr_op_nopout_test ...[2024-04-24 01:46:21.029810] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:16:21.153 [2024-04-24 01:46:21.029917] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:16:21.153 [2024-04-24 01:46:21.029962] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:16:21.153 [2024-04-24 01:46:21.030002] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:16:21.153 passed 00:16:21.153 Test: pdu_hdr_op_data_test ...[2024-04-24 01:46:21.030052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:16:21.153 [2024-04-24 01:46:21.030135] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:16:21.153 [2024-04-24 01:46:21.030206] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:16:21.153 [2024-04-24 01:46:21.030274] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:16:21.153 [2024-04-24 01:46:21.030344] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:16:21.153 [2024-04-24 01:46:21.030454] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:16:21.153 [2024-04-24 01:46:21.030510] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:16:21.153 passed 00:16:21.153 Test: empty_text_with_cbit_test ...passed 00:16:21.153 Test: pdu_payload_read_test ...[2024-04-24 
01:46:21.032706] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:16:21.153 passed 00:16:21.153 Test: data_out_pdu_sequence_test ...passed 00:16:21.153 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:16:21.153 00:16:21.153 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.153 suites 1 1 n/a 0 0 00:16:21.153 tests 24 24 24 0 0 00:16:21.153 asserts 150253 150253 150253 0 n/a 00:16:21.153 00:16:21.153 Elapsed time = 0.017 seconds 00:16:21.153 01:46:21 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:16:21.153 00:16:21.153 00:16:21.153 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.153 http://cunit.sourceforge.net/ 00:16:21.153 00:16:21.153 00:16:21.153 Suite: init_grp_suite 00:16:21.153 Test: create_initiator_group_success_case ...passed 00:16:21.153 Test: find_initiator_group_success_case ...passed 00:16:21.153 Test: register_initiator_group_twice_case ...passed 00:16:21.153 Test: add_initiator_name_success_case ...passed 00:16:21.153 Test: add_initiator_name_fail_case ...[2024-04-24 01:46:21.075043] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:16:21.153 passed 00:16:21.153 Test: delete_all_initiator_names_success_case ...passed 00:16:21.153 Test: add_netmask_success_case ...passed 00:16:21.153 Test: add_netmask_fail_case ...[2024-04-24 01:46:21.075542] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:16:21.153 passed 00:16:21.153 Test: delete_all_netmasks_success_case ...passed 00:16:21.153 Test: initiator_name_overwrite_all_to_any_case ...passed 00:16:21.153 Test: netmask_overwrite_all_to_any_case ...passed 00:16:21.153 Test: add_delete_initiator_names_case ...passed 00:16:21.153 Test: add_duplicated_initiator_names_case ...passed 00:16:21.153 Test: delete_nonexisting_initiator_names_case ...passed 00:16:21.153 Test: add_delete_netmasks_case ...passed 00:16:21.154 Test: add_duplicated_netmasks_case ...passed 00:16:21.154 Test: delete_nonexisting_netmasks_case ...passed 00:16:21.154 00:16:21.154 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.154 suites 1 1 n/a 0 0 00:16:21.154 tests 17 17 17 0 0 00:16:21.154 asserts 108 108 108 0 n/a 00:16:21.154 00:16:21.154 Elapsed time = 0.001 seconds 00:16:21.154 01:46:21 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:16:21.154 00:16:21.154 00:16:21.154 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.154 http://cunit.sourceforge.net/ 00:16:21.154 00:16:21.154 00:16:21.154 Suite: portal_grp_suite 00:16:21.154 Test: portal_create_ipv4_normal_case ...passed 00:16:21.154 Test: portal_create_ipv6_normal_case ...passed 00:16:21.154 Test: portal_create_ipv4_wildcard_case ...passed 00:16:21.154 Test: portal_create_ipv6_wildcard_case ...passed 00:16:21.154 Test: portal_create_twice_case ...passed 00:16:21.154 Test: portal_grp_register_unregister_case ...passed 00:16:21.154 Test: portal_grp_register_twice_case ...passed 00:16:21.154 Test: portal_grp_add_delete_case ...[2024-04-24 01:46:21.119284] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:16:21.154 passed 00:16:21.154 Test: portal_grp_add_delete_twice_case ...passed 00:16:21.154 00:16:21.154 Run Summary: 
Type Total Ran Passed Failed Inactive 00:16:21.154 suites 1 1 n/a 0 0 00:16:21.154 tests 9 9 9 0 0 00:16:21.154 asserts 44 44 44 0 n/a 00:16:21.154 00:16:21.154 Elapsed time = 0.004 seconds 00:16:21.154 00:16:21.154 real 0m0.269s 00:16:21.154 user 0m0.124s 00:16:21.154 sys 0m0.148s 00:16:21.154 01:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.154 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.154 ************************************ 00:16:21.154 END TEST unittest_iscsi 00:16:21.154 ************************************ 00:16:21.154 01:46:21 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:16:21.154 01:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:21.154 01:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.154 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.412 ************************************ 00:16:21.412 START TEST unittest_json 00:16:21.412 ************************************ 00:16:21.412 01:46:21 -- common/autotest_common.sh@1111 -- # unittest_json 00:16:21.412 01:46:21 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:16:21.412 00:16:21.412 00:16:21.412 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.412 http://cunit.sourceforge.net/ 00:16:21.412 00:16:21.412 00:16:21.412 Suite: json 00:16:21.412 Test: test_parse_literal ...passed 00:16:21.412 Test: test_parse_string_simple ...passed 00:16:21.412 Test: test_parse_string_control_chars ...passed 00:16:21.412 Test: test_parse_string_utf8 ...passed 00:16:21.412 Test: test_parse_string_escapes_twochar ...passed 00:16:21.412 Test: test_parse_string_escapes_unicode ...passed 00:16:21.412 Test: test_parse_number ...passed 00:16:21.412 Test: test_parse_array ...passed 00:16:21.412 Test: test_parse_object ...passed 00:16:21.412 Test: test_parse_nesting ...passed 00:16:21.412 Test: test_parse_comment ...passed 00:16:21.412 00:16:21.412 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.412 suites 1 1 n/a 0 0 00:16:21.412 tests 11 11 11 0 0 00:16:21.412 asserts 1516 1516 1516 0 n/a 00:16:21.412 00:16:21.412 Elapsed time = 0.002 seconds 00:16:21.412 01:46:21 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:16:21.412 00:16:21.412 00:16:21.412 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.412 http://cunit.sourceforge.net/ 00:16:21.412 00:16:21.412 00:16:21.412 Suite: json 00:16:21.412 Test: test_strequal ...passed 00:16:21.412 Test: test_num_to_uint16 ...passed 00:16:21.412 Test: test_num_to_int32 ...passed 00:16:21.412 Test: test_num_to_uint64 ...passed 00:16:21.412 Test: test_decode_object ...passed 00:16:21.412 Test: test_decode_array ...passed 00:16:21.412 Test: test_decode_bool ...passed 00:16:21.412 Test: test_decode_uint16 ...passed 00:16:21.412 Test: test_decode_int32 ...passed 00:16:21.412 Test: test_decode_uint32 ...passed 00:16:21.412 Test: test_decode_uint64 ...passed 00:16:21.412 Test: test_decode_string ...passed 00:16:21.412 Test: test_decode_uuid ...passed 00:16:21.412 Test: test_find ...passed 00:16:21.412 Test: test_find_array ...passed 00:16:21.413 Test: test_iterating ...passed 00:16:21.413 Test: test_free_object ...passed 00:16:21.413 00:16:21.413 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.413 suites 1 1 n/a 0 0 00:16:21.413 tests 17 17 17 0 0 00:16:21.413 asserts 236 236 236 0 n/a 00:16:21.413 00:16:21.413 Elapsed time = 0.001 seconds 00:16:21.413 
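Editor's note on the output above: each *_ut binary invoked by unittest.sh is a standalone CUnit program that registers one suite, runs the "Test:" cases listed in the log, and prints the "Run Summary" block; the interleaved *ERROR* lines are intentional, produced by tests that deliberately drive library error paths, which is why every suite still reports 0 failed. As a rough illustration only (the file, suite, and test names below are hypothetical and not taken from SPDK's sources), a minimal CUnit driver of the same shape looks like this:

    /* minimal_cunit_ut.c - illustrative sketch of a CUnit-driven test binary
     * similar in shape to the *_ut executables invoked above.
     * Build (assumption): gcc minimal_cunit_ut.c -lcunit -o minimal_ut
     */
    #include <CUnit/Basic.h>

    /* Trivial test case; real SPDK tests exercise error paths on purpose,
     * which is what generates the *ERROR* lines seen in this log. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);   /* per-test lines plus the Run Summary table */
        CU_basic_run_tests();
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return num_failures;                 /* non-zero exit marks the suite as failed */
    }

Only a failing assertion (not an *ERROR* print) would move a test into the "Failed" column of the summaries above.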
01:46:21 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:16:21.413 00:16:21.413 00:16:21.413 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.413 http://cunit.sourceforge.net/ 00:16:21.413 00:16:21.413 00:16:21.413 Suite: json 00:16:21.413 Test: test_write_literal ...passed 00:16:21.413 Test: test_write_string_simple ...passed 00:16:21.413 Test: test_write_string_escapes ...passed 00:16:21.413 Test: test_write_string_utf16le ...passed 00:16:21.413 Test: test_write_number_int32 ...passed 00:16:21.413 Test: test_write_number_uint32 ...passed 00:16:21.413 Test: test_write_number_uint128 ...passed 00:16:21.413 Test: test_write_string_number_uint128 ...passed 00:16:21.413 Test: test_write_number_int64 ...passed 00:16:21.413 Test: test_write_number_uint64 ...passed 00:16:21.413 Test: test_write_number_double ...passed 00:16:21.413 Test: test_write_uuid ...passed 00:16:21.413 Test: test_write_array ...passed 00:16:21.413 Test: test_write_object ...passed 00:16:21.413 Test: test_write_nesting ...passed 00:16:21.413 Test: test_write_val ...passed 00:16:21.413 00:16:21.413 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.413 suites 1 1 n/a 0 0 00:16:21.413 tests 16 16 16 0 0 00:16:21.413 asserts 918 918 918 0 n/a 00:16:21.413 00:16:21.413 Elapsed time = 0.005 seconds 00:16:21.413 01:46:21 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:16:21.413 00:16:21.413 00:16:21.413 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.413 http://cunit.sourceforge.net/ 00:16:21.413 00:16:21.413 00:16:21.413 Suite: jsonrpc 00:16:21.413 Test: test_parse_request ...passed 00:16:21.413 Test: test_parse_request_streaming ...passed 00:16:21.413 00:16:21.413 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.413 suites 1 1 n/a 0 0 00:16:21.413 tests 2 2 2 0 0 00:16:21.413 asserts 289 289 289 0 n/a 00:16:21.413 00:16:21.413 Elapsed time = 0.004 seconds 00:16:21.413 00:16:21.413 real 0m0.165s 00:16:21.413 user 0m0.077s 00:16:21.413 sys 0m0.090s 00:16:21.413 01:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.413 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.413 ************************************ 00:16:21.413 END TEST unittest_json 00:16:21.413 ************************************ 00:16:21.413 01:46:21 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:16:21.413 01:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:21.413 01:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.413 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.671 ************************************ 00:16:21.671 START TEST unittest_rpc 00:16:21.671 ************************************ 00:16:21.671 01:46:21 -- common/autotest_common.sh@1111 -- # unittest_rpc 00:16:21.671 01:46:21 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:16:21.671 00:16:21.671 00:16:21.671 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.671 http://cunit.sourceforge.net/ 00:16:21.671 00:16:21.671 00:16:21.671 Suite: rpc 00:16:21.671 Test: test_jsonrpc_handler ...passed 00:16:21.671 Test: test_spdk_rpc_is_method_allowed ...passed 00:16:21.671 Test: test_rpc_get_methods ...[2024-04-24 01:46:21.534776] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:16:21.671 passed 00:16:21.671 Test: 
test_rpc_spdk_get_version ...passed 00:16:21.671 Test: test_spdk_rpc_listen_close ...passed 00:16:21.671 Test: test_rpc_run_multiple_servers ...passed 00:16:21.671 00:16:21.671 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.671 suites 1 1 n/a 0 0 00:16:21.671 tests 6 6 6 0 0 00:16:21.671 asserts 23 23 23 0 n/a 00:16:21.671 00:16:21.671 Elapsed time = 0.001 seconds 00:16:21.671 00:16:21.671 real 0m0.037s 00:16:21.671 user 0m0.022s 00:16:21.671 sys 0m0.016s 00:16:21.671 01:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.671 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.671 ************************************ 00:16:21.671 END TEST unittest_rpc 00:16:21.671 ************************************ 00:16:21.671 01:46:21 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:16:21.671 01:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:21.671 01:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.671 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.671 ************************************ 00:16:21.671 START TEST unittest_notify 00:16:21.671 ************************************ 00:16:21.671 01:46:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:16:21.671 00:16:21.671 00:16:21.671 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.671 http://cunit.sourceforge.net/ 00:16:21.671 00:16:21.671 00:16:21.671 Suite: app_suite 00:16:21.671 Test: notify ...passed 00:16:21.671 00:16:21.671 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.671 suites 1 1 n/a 0 0 00:16:21.671 tests 1 1 1 0 0 00:16:21.671 asserts 13 13 13 0 n/a 00:16:21.671 00:16:21.671 Elapsed time = 0.000 seconds 00:16:21.671 00:16:21.671 real 0m0.032s 00:16:21.671 user 0m0.020s 00:16:21.671 sys 0m0.012s 00:16:21.671 01:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.671 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.671 ************************************ 00:16:21.671 END TEST unittest_notify 00:16:21.671 ************************************ 00:16:21.671 01:46:21 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:16:21.671 01:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:21.671 01:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.671 01:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.931 ************************************ 00:16:21.931 START TEST unittest_nvme 00:16:21.931 ************************************ 00:16:21.931 01:46:21 -- common/autotest_common.sh@1111 -- # unittest_nvme 00:16:21.931 01:46:21 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:16:21.931 00:16:21.931 00:16:21.931 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.931 http://cunit.sourceforge.net/ 00:16:21.931 00:16:21.931 00:16:21.931 Suite: nvme 00:16:21.931 Test: test_opc_data_transfer ...passed 00:16:21.931 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:16:21.931 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:16:21.931 Test: test_trid_parse_and_compare ...[2024-04-24 01:46:21.813600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1171:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:16:21.931 [2024-04-24 01:46:21.813932] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed 
to parse transport ID 00:16:21.931 [2024-04-24 01:46:21.814046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1183:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:16:21.931 [2024-04-24 01:46:21.814096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:16:21.931 [2024-04-24 01:46:21.814136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1194:parse_next_key: *ERROR*: Key without value 00:16:21.931 [2024-04-24 01:46:21.814236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:16:21.931 passed 00:16:21.931 Test: test_trid_trtype_str ...passed 00:16:21.931 Test: test_trid_adrfam_str ...passed 00:16:21.931 Test: test_nvme_ctrlr_probe ...passed 00:16:21.931 Test: test_spdk_nvme_probe ...[2024-04-24 01:46:21.814449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:16:21.931 [2024-04-24 01:46:21.814557] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:16:21.931 [2024-04-24 01:46:21.814600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:16:21.931 [2024-04-24 01:46:21.814707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:16:21.931 [2024-04-24 01:46:21.814756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:16:21.931 passed 00:16:21.931 Test: test_spdk_nvme_connect ...[2024-04-24 01:46:21.814860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 993:spdk_nvme_connect: *ERROR*: No transport ID specified 00:16:21.931 [2024-04-24 01:46:21.815224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:16:21.931 passed 00:16:21.931 Test: test_nvme_ctrlr_probe_internal ...[2024-04-24 01:46:21.815300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1004:spdk_nvme_connect: *ERROR*: Create probe context failed 00:16:21.931 passed 00:16:21.931 Test: test_nvme_init_controllers ...[2024-04-24 01:46:21.815431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:16:21.931 [2024-04-24 01:46:21.815479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:21.931 [2024-04-24 01:46:21.815564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:16:21.931 passed 00:16:21.931 Test: test_nvme_driver_init ...[2024-04-24 01:46:21.815682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:16:21.931 [2024-04-24 01:46:21.815727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:16:21.931 [2024-04-24 01:46:21.924877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:16:21.931 passed 00:16:21.931 Test: test_spdk_nvme_detach ...passed 00:16:21.931 Test: test_nvme_completion_poll_cb ...passed 00:16:21.931 Test: test_nvme_user_copy_cmd_complete ...[2024-04-24 01:46:21.925109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:16:21.931 passed 
00:16:21.931 Test: test_nvme_allocate_request_null ...passed 00:16:21.931 Test: test_nvme_allocate_request ...passed 00:16:21.931 Test: test_nvme_free_request ...passed 00:16:21.931 Test: test_nvme_allocate_request_user_copy ...passed 00:16:21.931 Test: test_nvme_robust_mutex_init_shared ...passed 00:16:21.931 Test: test_nvme_request_check_timeout ...passed 00:16:21.931 Test: test_nvme_wait_for_completion ...passed 00:16:21.931 Test: test_spdk_nvme_parse_func ...passed 00:16:21.931 Test: test_spdk_nvme_detach_async ...passed 00:16:21.931 Test: test_nvme_parse_addr ...[2024-04-24 01:46:21.926161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1581:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:16:21.931 passed 00:16:21.931 00:16:21.931 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.931 suites 1 1 n/a 0 0 00:16:21.931 tests 25 25 25 0 0 00:16:21.931 asserts 326 326 326 0 n/a 00:16:21.931 00:16:21.931 Elapsed time = 0.006 seconds 00:16:21.931 01:46:21 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:16:21.931 00:16:21.931 00:16:21.931 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.931 http://cunit.sourceforge.net/ 00:16:21.931 00:16:21.931 00:16:21.931 Suite: nvme_ctrlr 00:16:21.931 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-24 01:46:21.964731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.931 passed 00:16:21.931 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-24 01:46:21.966577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.931 passed 00:16:21.931 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-24 01:46:21.967844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.931 passed 00:16:21.931 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-24 01:46:21.969077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.931 passed 00:16:21.931 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-24 01:46:21.970331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.931 [2024-04-24 01:46:21.971496] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 01:46:21.972698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 01:46:21.973858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:16:21.932 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-24 01:46:21.976167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.932 [2024-04-24 01:46:21.978406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 
01:46:21.979590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:16:21.932 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-24 01:46:21.981994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.932 [2024-04-24 01:46:21.983175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 01:46:21.985500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:16:21.932 Test: test_nvme_ctrlr_init_delay ...[2024-04-24 01:46:21.987913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.932 passed 00:16:21.932 Test: test_alloc_io_qpair_rr_1 ...[2024-04-24 01:46:21.989202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.932 [2024-04-24 01:46:21.989384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:16:21.932 [2024-04-24 01:46:21.989624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:16:21.932 [2024-04-24 01:46:21.989710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:16:21.932 [2024-04-24 01:46:21.989764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:16:21.932 passed 00:16:21.932 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:16:21.932 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:16:21.932 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-24 01:46:21.989919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.932 passed 00:16:21.932 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-24 01:46:21.990142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:21.932 [2024-04-24 01:46:21.990299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:16:21.932 passed 00:16:21.932 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-24 01:46:21.990672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4857:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:16:21.932 [2024-04-24 01:46:21.990900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:16:21.932 [2024-04-24 01:46:21.991034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4934:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:16:21.932 [2024-04-24 01:46:21.991151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:16:21.932 passed 00:16:21.932 Test: test_nvme_ctrlr_fail ...[2024-04-24 01:46:21.991245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:16:21.932 passed 00:16:21.932 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:16:21.932 Test: test_nvme_ctrlr_set_supported_features ...passed 00:16:21.932 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:16:21.932 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-24 01:46:21.991570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:16:22.539 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:16:22.539 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:16:22.539 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-24 01:46:22.333367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-24 01:46:22.340405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-24 01:46:22.341629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 [2024-04-24 01:46:22.341693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2882:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:16:22.539 passed 00:16:22.539 Test: test_alloc_io_qpair_fail ...[2024-04-24 01:46:22.342832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_add_remove_process ...passed 00:16:22.539 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-04-24 01:46:22.342950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 510:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_set_state ...passed 00:16:22.539 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-24 01:46:22.343091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:16:22.539 [2024-04-24 01:46:22.343140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-24 01:46:22.366483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-24 01:46:22.400074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_reset ...[2024-04-24 01:46:22.401548] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_aer_callback ...[2024-04-24 01:46:22.401865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-24 01:46:22.403207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:16:22.539 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:16:22.539 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-24 01:46:22.404820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:16:22.539 Test: test_nvme_ctrlr_ana_resize ...[2024-04-24 01:46:22.406104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:16:22.539 Test: test_nvme_transport_ctrlr_ready ...[2024-04-24 01:46:22.407566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:16:22.539 passed 00:16:22.539 Test: test_nvme_ctrlr_disable ...[2024-04-24 01:46:22.407610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4079:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:16:22.539 [2024-04-24 01:46:22.407658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:16:22.539 passed 00:16:22.539 00:16:22.539 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.539 suites 1 1 n/a 0 0 00:16:22.539 tests 43 43 43 0 0 00:16:22.539 asserts 10418 10418 10418 0 n/a 00:16:22.539 00:16:22.539 Elapsed time = 0.404 seconds 00:16:22.539 01:46:22 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:16:22.539 00:16:22.539 00:16:22.539 CUnit - A unit testing framework for C - Version 2.1-3 
00:16:22.539 http://cunit.sourceforge.net/ 00:16:22.539 00:16:22.539 00:16:22.539 Suite: nvme_ctrlr_cmd 00:16:22.539 Test: test_get_log_pages ...passed 00:16:22.539 Test: test_set_feature_cmd ...passed 00:16:22.539 Test: test_set_feature_ns_cmd ...passed 00:16:22.539 Test: test_get_feature_cmd ...passed 00:16:22.539 Test: test_get_feature_ns_cmd ...passed 00:16:22.539 Test: test_abort_cmd ...passed 00:16:22.539 Test: test_set_host_id_cmds ...[2024-04-24 01:46:22.461442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:16:22.539 passed 00:16:22.539 Test: test_io_cmd_raw_no_payload_build ...passed 00:16:22.539 Test: test_io_raw_cmd ...passed 00:16:22.540 Test: test_io_raw_cmd_with_md ...passed 00:16:22.540 Test: test_namespace_attach ...passed 00:16:22.540 Test: test_namespace_detach ...passed 00:16:22.540 Test: test_namespace_create ...passed 00:16:22.540 Test: test_namespace_delete ...passed 00:16:22.540 Test: test_doorbell_buffer_config ...passed 00:16:22.540 Test: test_format_nvme ...passed 00:16:22.540 Test: test_fw_commit ...passed 00:16:22.540 Test: test_fw_image_download ...passed 00:16:22.540 Test: test_sanitize ...passed 00:16:22.540 Test: test_directive ...passed 00:16:22.540 Test: test_nvme_request_add_abort ...passed 00:16:22.540 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:16:22.540 Test: test_nvme_ctrlr_cmd_identify ...passed 00:16:22.540 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:16:22.540 00:16:22.540 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.540 suites 1 1 n/a 0 0 00:16:22.540 tests 24 24 24 0 0 00:16:22.540 asserts 198 198 198 0 n/a 00:16:22.540 00:16:22.540 Elapsed time = 0.001 seconds 00:16:22.540 01:46:22 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:16:22.540 00:16:22.540 00:16:22.540 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.540 http://cunit.sourceforge.net/ 00:16:22.540 00:16:22.540 00:16:22.540 Suite: nvme_ctrlr_cmd 00:16:22.540 Test: test_geometry_cmd ...passed 00:16:22.540 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:16:22.540 00:16:22.540 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.540 suites 1 1 n/a 0 0 00:16:22.540 tests 2 2 2 0 0 00:16:22.540 asserts 7 7 7 0 n/a 00:16:22.540 00:16:22.540 Elapsed time = 0.000 seconds 00:16:22.540 01:46:22 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:16:22.540 00:16:22.540 00:16:22.540 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.540 http://cunit.sourceforge.net/ 00:16:22.540 00:16:22.540 00:16:22.540 Suite: nvme 00:16:22.540 Test: test_nvme_ns_construct ...passed 00:16:22.540 Test: test_nvme_ns_uuid ...passed 00:16:22.540 Test: test_nvme_ns_csi ...passed 00:16:22.540 Test: test_nvme_ns_data ...passed 00:16:22.540 Test: test_nvme_ns_set_identify_data ...passed 00:16:22.540 Test: test_spdk_nvme_ns_get_values ...passed 00:16:22.540 Test: test_spdk_nvme_ns_is_active ...passed 00:16:22.540 Test: spdk_nvme_ns_supports ...passed 00:16:22.540 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:16:22.540 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:16:22.540 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:16:22.540 Test: test_nvme_ns_find_id_desc ...passed 00:16:22.540 00:16:22.540 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.540 suites 1 1 n/a 0 0 00:16:22.540 tests 
12 12 12 0 0 00:16:22.540 asserts 83 83 83 0 n/a 00:16:22.540 00:16:22.540 Elapsed time = 0.001 seconds 00:16:22.540 01:46:22 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:16:22.540 00:16:22.540 00:16:22.540 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.540 http://cunit.sourceforge.net/ 00:16:22.540 00:16:22.540 00:16:22.540 Suite: nvme_ns_cmd 00:16:22.540 Test: split_test ...passed 00:16:22.540 Test: split_test2 ...passed 00:16:22.540 Test: split_test3 ...passed 00:16:22.540 Test: split_test4 ...passed 00:16:22.540 Test: test_nvme_ns_cmd_flush ...passed 00:16:22.540 Test: test_nvme_ns_cmd_dataset_management ...passed 00:16:22.540 Test: test_nvme_ns_cmd_copy ...passed 00:16:22.540 Test: test_io_flags ...[2024-04-24 01:46:22.567287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:16:22.540 passed 00:16:22.540 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:16:22.540 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:16:22.540 Test: test_nvme_ns_cmd_reservation_register ...passed 00:16:22.540 Test: test_nvme_ns_cmd_reservation_release ...passed 00:16:22.540 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:16:22.540 Test: test_nvme_ns_cmd_reservation_report ...passed 00:16:22.540 Test: test_cmd_child_request ...passed 00:16:22.540 Test: test_nvme_ns_cmd_readv ...passed 00:16:22.540 Test: test_nvme_ns_cmd_read_with_md ...passed 00:16:22.540 Test: test_nvme_ns_cmd_writev ...[2024-04-24 01:46:22.568263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:16:22.540 passed 00:16:22.540 Test: test_nvme_ns_cmd_write_with_md ...passed 00:16:22.540 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:16:22.540 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:16:22.540 Test: test_nvme_ns_cmd_comparev ...passed 00:16:22.540 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:16:22.540 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:16:22.540 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:16:22.540 Test: test_nvme_ns_cmd_setup_request ...passed 00:16:22.540 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:16:22.540 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-04-24 01:46:22.569660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:16:22.540 passed 00:16:22.540 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-04-24 01:46:22.569734] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:16:22.540 passed 00:16:22.540 Test: test_nvme_ns_cmd_verify ...passed 00:16:22.540 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:16:22.540 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:16:22.540 00:16:22.540 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.540 suites 1 1 n/a 0 0 00:16:22.540 tests 32 32 32 0 0 00:16:22.540 asserts 550 550 550 0 n/a 00:16:22.540 00:16:22.540 Elapsed time = 0.003 seconds 00:16:22.540 01:46:22 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:16:22.540 00:16:22.540 00:16:22.540 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.540 http://cunit.sourceforge.net/ 00:16:22.540 00:16:22.540 00:16:22.540 Suite: nvme_ns_cmd 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:16:22.540 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:16:22.540 00:16:22.540 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.540 suites 1 1 n/a 0 0 00:16:22.540 tests 12 12 12 0 0 00:16:22.540 asserts 123 123 123 0 n/a 00:16:22.540 00:16:22.541 Elapsed time = 0.001 seconds 00:16:22.799 01:46:22 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:16:22.799 00:16:22.799 00:16:22.799 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.799 http://cunit.sourceforge.net/ 00:16:22.799 00:16:22.799 00:16:22.799 Suite: nvme_qpair 00:16:22.799 Test: test3 ...passed 00:16:22.799 Test: test_ctrlr_failed ...passed 00:16:22.799 Test: struct_packing ...passed 00:16:22.799 Test: test_nvme_qpair_process_completions ...passed 00:16:22.799 Test: test_nvme_completion_is_retry ...passed 00:16:22.799 Test: test_get_status_string ...passed 00:16:22.799 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:16:22.799 Test: test_nvme_qpair_submit_request ...passed 00:16:22.799 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:16:22.799 Test: test_nvme_qpair_manual_complete_request ...passed 00:16:22.799 Test: test_nvme_qpair_init_deinit ...passed 00:16:22.799 Test: test_nvme_get_sgl_print_info ...passed 00:16:22.799 00:16:22.799 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.799 suites 1 1 n/a 0 0 00:16:22.799 tests 12 12 12 0 0 00:16:22.799 asserts 154 154 154 0 n/a 00:16:22.799 00:16:22.799 Elapsed time = 0.001 seconds 00:16:22.799 [2024-04-24 01:46:22.643042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:22.799 [2024-04-24 01:46:22.643405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:22.799 [2024-04-24 01:46:22.643467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:22.799 [2024-04-24 01:46:22.643563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:16:22.799 [2024-04-24 01:46:22.644094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:22.799 01:46:22 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:16:22.799 00:16:22.799 00:16:22.799 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.799 http://cunit.sourceforge.net/ 00:16:22.799 00:16:22.799 00:16:22.799 Suite: nvme_pcie 00:16:22.799 Test: test_prp_list_append 
...passed 00:16:22.799 Test: test_nvme_pcie_hotplug_monitor ...[2024-04-24 01:46:22.683296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:16:22.799 [2024-04-24 01:46:22.683666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:16:22.799 [2024-04-24 01:46:22.683716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:16:22.799 [2024-04-24 01:46:22.683949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:16:22.799 [2024-04-24 01:46:22.684033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:16:22.799 passed 00:16:22.799 Test: test_shadow_doorbell_update ...passed 00:16:22.799 Test: test_build_contig_hw_sgl_request ...passed 00:16:22.799 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:16:22.799 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:16:22.799 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:16:22.799 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:16:22.799 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:16:22.799 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:16:22.799 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:16:22.799 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:16:22.800 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:16:22.800 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:16:22.800 00:16:22.800 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.800 suites 1 1 n/a 0 0 00:16:22.800 tests 14 14 14 0 0 00:16:22.800 asserts 235 235 235 0 n/a 00:16:22.800 00:16:22.800 Elapsed time = 0.002 seconds 00:16:22.800 [2024-04-24 01:46:22.684380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:16:22.800 [2024-04-24 01:46:22.684516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:16:22.800 [2024-04-24 01:46:22.684613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:16:22.800 [2024-04-24 01:46:22.684674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:16:22.800 [2024-04-24 01:46:22.684747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:16:22.800 01:46:22 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:16:22.800 00:16:22.800 00:16:22.800 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.800 http://cunit.sourceforge.net/ 00:16:22.800 00:16:22.800 00:16:22.800 Suite: nvme_ns_cmd 00:16:22.800 Test: nvme_poll_group_create_test ...passed 00:16:22.800 Test: nvme_poll_group_add_remove_test ...passed 00:16:22.800 Test: nvme_poll_group_process_completions ...passed 00:16:22.800 Test: nvme_poll_group_destroy_test ...passed 00:16:22.800 Test: nvme_poll_group_get_free_stats ...passed 00:16:22.800 00:16:22.800 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.800 suites 1 1 n/a 0 0 00:16:22.800 tests 5 5 5 0 0 00:16:22.800 asserts 75 75 75 0 n/a 00:16:22.800 00:16:22.800 Elapsed time = 0.000 seconds 00:16:22.800 01:46:22 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:16:22.800 00:16:22.800 00:16:22.800 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.800 http://cunit.sourceforge.net/ 00:16:22.800 00:16:22.800 00:16:22.800 Suite: nvme_quirks 00:16:22.800 Test: test_nvme_quirks_striping ...passed 00:16:22.800 00:16:22.800 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.800 suites 1 1 n/a 0 0 00:16:22.800 tests 1 1 1 0 0 00:16:22.800 asserts 5 5 5 0 n/a 00:16:22.800 00:16:22.800 Elapsed time = 0.000 seconds 00:16:22.800 01:46:22 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:16:22.800 00:16:22.800 00:16:22.800 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.800 http://cunit.sourceforge.net/ 00:16:22.800 00:16:22.800 00:16:22.800 Suite: nvme_tcp 00:16:22.800 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:16:22.800 Test: test_nvme_tcp_build_iovs ...passed 00:16:22.800 Test: test_nvme_tcp_build_sgl_request ...[2024-04-24 01:46:22.796945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffed2bff990, and the iovcnt=16, remaining_size=28672 00:16:22.800 passed 00:16:22.800 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:16:22.800 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:16:22.800 Test: test_nvme_tcp_req_complete_safe ...passed 00:16:22.800 Test: test_nvme_tcp_req_get ...passed 00:16:22.800 Test: test_nvme_tcp_req_init ...passed 00:16:22.800 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:16:22.800 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:16:22.800 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-04-24 01:46:22.797649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c016c0 is same with the state(6) to be set 00:16:22.800 passed 00:16:22.800 Test: test_nvme_tcp_alloc_reqs ...passed 00:16:22.800 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-04-24 01:46:22.798018] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00850 is same with the state(5) to be set 00:16:22.800 passed 00:16:22.800 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-24 01:46:22.798097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffed2c013a0 00:16:22.800 [2024-04-24 01:46:22.798161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1223:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:16:22.800 [2024-04-24 01:46:22.798256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.798324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1174:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:16:22.800 [2024-04-24 01:46:22.798414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.798466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:22.800 [2024-04-24 01:46:22.798513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.798583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.798635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.798703] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.798752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 passed 00:16:22.800 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-24 01:46:22.798811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00d10 is same with the state(5) to be set 00:16:22.800 [2024-04-24 01:46:22.799002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:16:22.800 [2024-04-24 01:46:22.799070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:16:22.800 [2024-04-24 01:46:22.799345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:16:22.800 passed 00:16:22.800 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:16:22.800 Test: test_nvme_tcp_c2h_payload_handle ...[2024-04-24 01:46:22.799472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffed2c00ee0): PDU Sequence Error 00:16:22.800 passed 00:16:22.800 Test: test_nvme_tcp_icresp_handle 
...[2024-04-24 01:46:22.799539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1564:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:16:22.801 [2024-04-24 01:46:22.799589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1571:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:16:22.801 [2024-04-24 01:46:22.799631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00860 is same with the state(5) to be set 00:16:22.801 [2024-04-24 01:46:22.799682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1580:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:16:22.801 [2024-04-24 01:46:22.799732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00860 is same with the state(5) to be set 00:16:22.801 [2024-04-24 01:46:22.799792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2c00860 is same with the state(0) to be set 00:16:22.801 passed 00:16:22.801 Test: test_nvme_tcp_pdu_payload_handle ...[2024-04-24 01:46:22.799861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffed2c013a0): PDU Sequence Error 00:16:22.801 passed 00:16:22.801 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:16:22.801 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-04-24 01:46:22.799948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1641:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffed2bffb30 00:16:22.801 passed 00:16:22.801 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-24 01:46:22.800192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffed2bff1b0, errno=0, rc=0 00:16:22.801 [2024-04-24 01:46:22.800254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2bff1b0 is same with the state(5) to be set 00:16:22.801 [2024-04-24 01:46:22.800336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffed2bff1b0 is same with the state(5) to be set 00:16:22.801 [2024-04-24 01:46:22.800394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffed2bff1b0 (0): Success 00:16:22.801 passed 00:16:22.801 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-24 01:46:22.800446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffed2bff1b0 (0): Success 00:16:23.058 [2024-04-24 01:46:22.941974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:16:23.058 [2024-04-24 01:46:22.942109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:16:23.058 passed 00:16:23.058 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:16:23.058 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:16:23.058 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-24 01:46:22.942334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:16:23.058 [2024-04-24 01:46:22.942382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:16:23.058 [2024-04-24 01:46:22.942649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:16:23.058 [2024-04-24 01:46:22.942702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:23.058 [2024-04-24 01:46:22.942838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:16:23.058 [2024-04-24 01:46:22.942904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:23.058 [2024-04-24 01:46:22.943038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000000c40 with addr=192.168.1.78, port=23 00:16:23.058 passed 00:16:23.058 Test: test_nvme_tcp_qpair_submit_request ...[2024-04-24 01:46:22.943108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:23.058 [2024-04-24 01:46:22.943269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:16:23.058 [2024-04-24 01:46:22.943321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1017:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:16:23.058 passed 00:16:23.058 00:16:23.058 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.058 suites 1 1 n/a 0 0 00:16:23.058 tests 27 27 27 0 0 00:16:23.058 asserts 624 624 624 0 n/a 00:16:23.058 00:16:23.058 Elapsed time = 0.146 seconds 00:16:23.058 01:46:22 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:16:23.058 00:16:23.058 00:16:23.058 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.058 http://cunit.sourceforge.net/ 00:16:23.058 00:16:23.058 00:16:23.058 Suite: nvme_transport 00:16:23.058 Test: test_nvme_get_transport ...passed 00:16:23.058 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:16:23.058 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:16:23.058 Test: test_nvme_transport_poll_group_add_remove ...passed 00:16:23.058 Test: test_ctrlr_get_memory_domains ...passed 00:16:23.058 00:16:23.058 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.058 suites 1 1 n/a 0 0 00:16:23.058 tests 5 5 5 0 0 00:16:23.058 asserts 28 28 28 0 n/a 00:16:23.058 00:16:23.058 Elapsed time = 0.000 seconds 00:16:23.058 01:46:23 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:16:23.058 00:16:23.058 00:16:23.058 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.058 http://cunit.sourceforge.net/ 00:16:23.058 00:16:23.058 00:16:23.058 Suite: nvme_io_msg 00:16:23.058 Test: test_nvme_io_msg_send ...passed 00:16:23.058 Test: 
test_nvme_io_msg_process ...passed 00:16:23.058 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:16:23.058 00:16:23.058 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.058 suites 1 1 n/a 0 0 00:16:23.058 tests 3 3 3 0 0 00:16:23.058 asserts 56 56 56 0 n/a 00:16:23.058 00:16:23.058 Elapsed time = 0.000 seconds 00:16:23.058 01:46:23 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:16:23.058 00:16:23.058 00:16:23.058 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.058 http://cunit.sourceforge.net/ 00:16:23.058 00:16:23.058 00:16:23.058 Suite: nvme_pcie_common 00:16:23.058 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-24 01:46:23.072212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:16:23.058 passed 00:16:23.058 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:16:23.058 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:16:23.058 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-24 01:46:23.072944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:16:23.058 [2024-04-24 01:46:23.073072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:16:23.058 passed 00:16:23.058 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-04-24 01:46:23.073116] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:16:23.058 passed 00:16:23.058 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-24 01:46:23.073577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:16:23.058 [2024-04-24 01:46:23.073628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:16:23.058 passed 00:16:23.058 00:16:23.058 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.058 suites 1 1 n/a 0 0 00:16:23.058 tests 6 6 6 0 0 00:16:23.058 asserts 148 148 148 0 n/a 00:16:23.058 00:16:23.058 Elapsed time = 0.002 seconds 00:16:23.058 01:46:23 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:16:23.058 00:16:23.058 00:16:23.058 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.058 http://cunit.sourceforge.net/ 00:16:23.058 00:16:23.058 00:16:23.058 Suite: nvme_fabric 00:16:23.058 Test: test_nvme_fabric_prop_set_cmd ...passed 00:16:23.058 Test: test_nvme_fabric_prop_get_cmd ...passed 00:16:23.058 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:16:23.058 Test: test_nvme_fabric_discover_probe ...passed 00:16:23.058 Test: test_nvme_fabric_qpair_connect ...[2024-04-24 01:46:23.107134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:16:23.058 passed 00:16:23.058 00:16:23.058 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.058 suites 1 1 n/a 0 0 00:16:23.058 tests 5 5 5 0 0 00:16:23.058 asserts 60 60 60 0 n/a 00:16:23.058 00:16:23.058 Elapsed time = 0.001 seconds 00:16:23.058 01:46:23 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:16:23.315 00:16:23.315 00:16:23.315 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.315 http://cunit.sourceforge.net/ 00:16:23.315 00:16:23.315 00:16:23.315 Suite: nvme_opal 00:16:23.315 Test: test_opal_nvme_security_recv_send_done ...passed 00:16:23.315 Test: test_opal_add_short_atom_header ...passed 00:16:23.315 00:16:23.315 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.315 suites 1 1 n/a 0 0 00:16:23.315 tests 2 2 2 0 0 00:16:23.315 asserts 22 22 22 0 n/a 00:16:23.315 00:16:23.315 Elapsed time = 0.000 seconds 00:16:23.315 [2024-04-24 01:46:23.149498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:16:23.315 ************************************ 00:16:23.315 END TEST unittest_nvme 00:16:23.315 ************************************ 00:16:23.315 00:16:23.315 real 0m1.368s 00:16:23.315 user 0m0.699s 00:16:23.315 sys 0m0.529s 00:16:23.315 01:46:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.315 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:23.315 01:46:23 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:16:23.315 01:46:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:23.315 01:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.315 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:23.315 ************************************ 00:16:23.315 START TEST unittest_log 00:16:23.315 ************************************ 00:16:23.315 01:46:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:16:23.315 00:16:23.315 00:16:23.315 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.315 http://cunit.sourceforge.net/ 00:16:23.315 00:16:23.315 00:16:23.315 Suite: log 00:16:23.315 Test: log_test ...[2024-04-24 01:46:23.283057] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:16:23.315 [2024-04-24 01:46:23.283729] log_ut.c: 57:log_test: *DEBUG*: log test 00:16:23.315 log dump test: 00:16:23.315 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:16:23.315 spdk dump test: 00:16:23.315 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:16:23.315 spdk dump test: 00:16:23.315 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:16:23.315 00000010 65 20 63 68 61 72 73 e chars 00:16:23.315 passed 00:16:24.247 Test: deprecation ...passed 00:16:24.247 00:16:24.247 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.247 suites 1 1 n/a 0 0 00:16:24.247 tests 2 2 2 0 0 00:16:24.247 asserts 73 73 73 0 n/a 00:16:24.247 00:16:24.247 Elapsed time = 0.002 seconds 00:16:24.247 00:16:24.247 real 0m1.039s 00:16:24.247 user 0m0.016s 00:16:24.247 sys 0m0.024s 00:16:24.247 01:46:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.247 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.247 ************************************ 00:16:24.247 END TEST unittest_log 00:16:24.247 ************************************ 00:16:24.506 01:46:24 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:16:24.506 01:46:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.506 01:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.506 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.506 
************************************ 00:16:24.506 START TEST unittest_lvol 00:16:24.506 ************************************ 00:16:24.506 01:46:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:16:24.506 00:16:24.506 00:16:24.506 CUnit - A unit testing framework for C - Version 2.1-3 00:16:24.506 http://cunit.sourceforge.net/ 00:16:24.506 00:16:24.506 00:16:24.506 Suite: lvol 00:16:24.506 Test: lvs_init_unload_success ...[2024-04-24 01:46:24.429553] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:16:24.506 passed 00:16:24.506 Test: lvs_init_destroy_success ...[2024-04-24 01:46:24.430003] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:16:24.506 passed 00:16:24.506 Test: lvs_init_opts_success ...passed 00:16:24.506 Test: lvs_unload_lvs_is_null_fail ...passed 00:16:24.506 Test: lvs_names ...[2024-04-24 01:46:24.430222] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:16:24.506 [2024-04-24 01:46:24.430267] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:16:24.506 [2024-04-24 01:46:24.430316] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:16:24.506 [2024-04-24 01:46:24.430465] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:16:24.506 passed 00:16:24.506 Test: lvol_create_destroy_success ...passed 00:16:24.506 Test: lvol_create_fail ...[2024-04-24 01:46:24.430967] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:16:24.506 [2024-04-24 01:46:24.431058] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:16:24.506 passed 00:16:24.506 Test: lvol_destroy_fail ...[2024-04-24 01:46:24.431325] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:16:24.506 passed 00:16:24.506 Test: lvol_close ...[2024-04-24 01:46:24.431482] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:16:24.506 [2024-04-24 01:46:24.431527] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:16:24.506 passed 00:16:24.506 Test: lvol_resize ...passed 00:16:24.506 Test: lvol_set_read_only ...passed 00:16:24.506 Test: test_lvs_load ...[2024-04-24 01:46:24.432247] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:16:24.506 [2024-04-24 01:46:24.432294] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:16:24.506 passed 00:16:24.506 Test: lvols_load ...[2024-04-24 01:46:24.432484] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:16:24.506 [2024-04-24 01:46:24.432570] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:16:24.506 passed 00:16:24.506 Test: lvol_open ...passed 00:16:24.506 Test: lvol_snapshot ...passed 00:16:24.506 Test: lvol_snapshot_fail ...[2024-04-24 01:46:24.433202] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:16:24.506 passed 00:16:24.506 
Test: lvol_clone ...passed 00:16:24.506 Test: lvol_clone_fail ...[2024-04-24 01:46:24.433656] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:16:24.506 passed 00:16:24.506 Test: lvol_iter_clones ...passed 00:16:24.506 Test: lvol_refcnt ...[2024-04-24 01:46:24.434089] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol b5f2082b-645a-413c-86ce-d2389886006e because it is still open 00:16:24.506 passed 00:16:24.506 Test: lvol_names ...[2024-04-24 01:46:24.434269] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:16:24.506 [2024-04-24 01:46:24.434350] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:16:24.506 [2024-04-24 01:46:24.434573] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:16:24.506 passed 00:16:24.506 Test: lvol_create_thin_provisioned ...passed 00:16:24.506 Test: lvol_rename ...[2024-04-24 01:46:24.434946] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:16:24.506 [2024-04-24 01:46:24.435030] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:16:24.506 passed 00:16:24.506 Test: lvs_rename ...[2024-04-24 01:46:24.435240] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:16:24.506 passed 00:16:24.506 Test: lvol_inflate ...[2024-04-24 01:46:24.435427] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:16:24.506 passed 00:16:24.506 Test: lvol_decouple_parent ...[2024-04-24 01:46:24.435621] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:16:24.506 passed 00:16:24.506 Test: lvol_get_xattr ...passed 00:16:24.506 Test: lvol_esnap_reload ...passed 00:16:24.506 Test: lvol_esnap_create_bad_args ...[2024-04-24 01:46:24.435980] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:16:24.506 [2024-04-24 01:46:24.436014] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:16:24.506 [2024-04-24 01:46:24.436060] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:16:24.506 [2024-04-24 01:46:24.436183] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:16:24.506 passed 00:16:24.506 Test: lvol_esnap_create_delete ...[2024-04-24 01:46:24.436275] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:16:24.506 passed 00:16:24.507 Test: lvol_esnap_load_esnaps ...[2024-04-24 01:46:24.436544] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:16:24.507 passed 00:16:24.507 Test: lvol_esnap_missing ...[2024-04-24 01:46:24.436650] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:16:24.507 [2024-04-24 01:46:24.436684] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:16:24.507 passed 00:16:24.507 Test: lvol_esnap_hotplug ... 00:16:24.507 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:16:24.507 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:16:24.507 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:16:24.507 [2024-04-24 01:46:24.437189] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol e7e0a183-806e-40d7-a765-720ea5016cb6: failed to create esnap bs_dev: error -12 00:16:24.507 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:16:24.507 [2024-04-24 01:46:24.437350] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4fafcc5f-6c5d-441b-9253-3bb3eb71478d: failed to create esnap bs_dev: error -12 00:16:24.507 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:16:24.507 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:16:24.507 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:16:24.507 [2024-04-24 01:46:24.437428] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol ee361e1c-1bab-4c55-b90f-66aaa618facd: failed to create esnap bs_dev: error -12 00:16:24.507 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:16:24.507 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:16:24.507 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:16:24.507 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:16:24.507 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:16:24.507 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:16:24.507 passed 00:16:24.507 Test: lvol_get_by ...passed 00:16:24.507 00:16:24.507 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.507 suites 1 1 n/a 0 0 00:16:24.507 tests 34 34 34 0 0 00:16:24.507 asserts 1439 1439 1439 0 n/a 00:16:24.507 00:16:24.507 Elapsed time = 0.009 seconds 00:16:24.507 00:16:24.507 real 0m0.048s 00:16:24.507 user 0m0.024s 00:16:24.507 sys 0m0.024s 00:16:24.507 01:46:24 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.507 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.507 ************************************ 00:16:24.507 END TEST unittest_lvol 00:16:24.507 ************************************ 00:16:24.507 01:46:24 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:24.507 01:46:24 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:16:24.507 01:46:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.507 01:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.507 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.507 ************************************ 00:16:24.507 START TEST unittest_nvme_rdma 00:16:24.507 ************************************ 00:16:24.507 01:46:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:16:24.507 00:16:24.507 00:16:24.507 CUnit - A unit testing framework for C - Version 2.1-3 00:16:24.507 http://cunit.sourceforge.net/ 00:16:24.507 00:16:24.507 00:16:24.507 Suite: nvme_rdma 00:16:24.507 Test: test_nvme_rdma_build_sgl_request ...[2024-04-24 01:46:24.587844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:16:24.507 [2024-04-24 01:46:24.588219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:16:24.507 [2024-04-24 01:46:24.588339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:16:24.507 Test: test_nvme_rdma_build_contig_request ...[2024-04-24 01:46:24.588449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:16:24.507 Test: test_nvme_rdma_create_reqs ...[2024-04-24 01:46:24.588592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_create_rsps ...[2024-04-24 01:46:24.588963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-24 01:46:24.589200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:16:24.507 [2024-04-24 01:46:24.589273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_poller_create ...passed 00:16:24.507 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:16:24.507 Test: test_nvme_rdma_ctrlr_construct ...[2024-04-24 01:46:24.589486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_req_put_and_get ...passed 00:16:24.507 Test: test_nvme_rdma_req_init ...passed 00:16:24.507 Test: test_nvme_rdma_validate_cm_event ...[2024-04-24 01:46:24.589814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_qpair_init ...passed 00:16:24.507 Test: test_nvme_rdma_qpair_submit_request ...[2024-04-24 01:46:24.589865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_memory_domain ...[2024-04-24 01:46:24.590083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:16:24.507 passed 00:16:24.507 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:16:24.507 Test: test_rdma_get_memory_translation ...passed 00:16:24.507 Test: test_get_rdma_qpair_from_wc ...passed 00:16:24.507 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:16:24.507 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-24 01:46:24.590212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:16:24.507 [2024-04-24 01:46:24.590286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:16:24.507 [2024-04-24 01:46:24.590391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:16:24.507 passed 00:16:24.507 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-24 01:46:24.590446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:16:24.507 [2024-04-24 01:46:24.590647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:16:24.508 [2024-04-24 01:46:24.590709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:16:24.508 [2024-04-24 01:46:24.590761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcf50ce370 on poll group 0x60c000000040 00:16:24.508 [2024-04-24 01:46:24.590835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:16:24.767 [2024-04-24 01:46:24.590895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:16:24.767 [2024-04-24 01:46:24.590941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcf50ce370 on poll group 0x60c000000040 00:16:24.767 [2024-04-24 01:46:24.591027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:16:24.767 passed 00:16:24.767 00:16:24.767 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.767 suites 1 1 n/a 0 0 00:16:24.767 tests 22 22 22 0 0 00:16:24.767 asserts 412 412 412 0 n/a 00:16:24.767 00:16:24.767 Elapsed time = 0.003 seconds 00:16:24.767 00:16:24.767 real 0m0.046s 00:16:24.767 user 0m0.023s 00:16:24.767 sys 0m0.023s 00:16:24.767 01:46:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.767 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.767 ************************************ 00:16:24.767 END TEST unittest_nvme_rdma 00:16:24.767 ************************************ 00:16:24.767 01:46:24 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:16:24.767 01:46:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.767 01:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.767 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.767 ************************************ 00:16:24.767 START TEST unittest_nvmf_transport 00:16:24.767 ************************************ 00:16:24.767 01:46:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:16:24.767 00:16:24.767 00:16:24.767 CUnit - A unit testing framework for C - Version 2.1-3 00:16:24.767 http://cunit.sourceforge.net/ 00:16:24.767 00:16:24.767 00:16:24.767 Suite: nvmf 00:16:24.767 Test: test_spdk_nvmf_transport_create ...[2024-04-24 01:46:24.743007] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:16:24.767 [2024-04-24 01:46:24.743418] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:16:24.767 [2024-04-24 01:46:24.743499] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:16:24.767 [2024-04-24 01:46:24.743643] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:16:24.767 passed 00:16:24.767 Test: test_nvmf_transport_poll_group_create ...passed 00:16:24.767 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-24 01:46:24.743949] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:16:24.767 [2024-04-24 01:46:24.744058] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:16:24.767 [2024-04-24 01:46:24.744104] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:16:24.767 passed 00:16:24.767 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:16:24.767 00:16:24.767 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.767 suites 1 1 n/a 0 0 00:16:24.767 tests 4 4 4 0 0 00:16:24.767 asserts 49 49 49 0 n/a 00:16:24.767 00:16:24.767 Elapsed time = 0.001 seconds 00:16:24.767 00:16:24.767 real 0m0.046s 00:16:24.767 user 0m0.020s 00:16:24.767 sys 0m0.025s 00:16:24.767 01:46:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.767 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.767 ************************************ 00:16:24.767 END TEST unittest_nvmf_transport 00:16:24.767 ************************************ 00:16:24.767 01:46:24 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:16:24.767 01:46:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.767 01:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.767 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 ************************************ 00:16:25.026 START TEST unittest_rdma 00:16:25.026 ************************************ 00:16:25.026 01:46:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:16:25.026 00:16:25.026 00:16:25.026 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.026 http://cunit.sourceforge.net/ 00:16:25.026 00:16:25.026 00:16:25.026 Suite: rdma_common 00:16:25.026 Test: test_spdk_rdma_pd ...[2024-04-24 01:46:24.880698] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:16:25.026 [2024-04-24 01:46:24.881504] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:16:25.026 passed 00:16:25.026 00:16:25.026 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.026 suites 1 1 n/a 0 0 00:16:25.026 tests 1 1 1 0 0 00:16:25.026 asserts 31 31 31 0 n/a 00:16:25.026 00:16:25.026 Elapsed time = 0.001 seconds 00:16:25.026 00:16:25.026 real 0m0.033s 00:16:25.026 user 0m0.013s 00:16:25.026 sys 0m0.021s 00:16:25.026 01:46:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.026 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 ************************************ 00:16:25.026 END TEST unittest_rdma 00:16:25.026 ************************************ 00:16:25.026 01:46:24 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:25.026 01:46:24 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:16:25.026 01:46:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:25.026 01:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.026 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 ************************************ 00:16:25.026 START TEST unittest_nvme_cuse 00:16:25.026 ************************************ 00:16:25.026 01:46:24 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:16:25.026 00:16:25.026 00:16:25.026 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.026 http://cunit.sourceforge.net/ 00:16:25.026 00:16:25.026 00:16:25.026 Suite: nvme_cuse 00:16:25.026 Test: test_cuse_nvme_submit_io_read_write ...passed 00:16:25.026 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:16:25.026 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:16:25.026 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:16:25.026 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:16:25.026 Test: test_cuse_nvme_submit_io ...[2024-04-24 01:46:25.015963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:16:25.026 passed 00:16:25.026 Test: test_cuse_nvme_reset ...[2024-04-24 01:46:25.016535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:16:25.026 passed 00:16:25.026 Test: test_nvme_cuse_stop ...passed 00:16:25.026 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:16:25.026 00:16:25.026 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.026 suites 1 1 n/a 0 0 00:16:25.026 tests 9 9 9 0 0 00:16:25.026 asserts 118 118 118 0 n/a 00:16:25.026 00:16:25.026 Elapsed time = 0.005 seconds 00:16:25.026 00:16:25.026 real 0m0.041s 00:16:25.026 user 0m0.026s 00:16:25.026 sys 0m0.017s 00:16:25.026 01:46:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.026 01:46:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 ************************************ 00:16:25.026 END TEST unittest_nvme_cuse 00:16:25.026 ************************************ 00:16:25.026 01:46:25 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:16:25.026 01:46:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:25.026 01:46:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.026 01:46:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.287 ************************************ 00:16:25.287 START TEST unittest_nvmf 00:16:25.287 ************************************ 00:16:25.287 01:46:25 -- common/autotest_common.sh@1111 -- # unittest_nvmf 00:16:25.287 01:46:25 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:16:25.287 00:16:25.287 00:16:25.287 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.287 http://cunit.sourceforge.net/ 00:16:25.287 00:16:25.287 00:16:25.287 Suite: nvmf 00:16:25.287 Test: test_get_log_page ...[2024-04-24 01:46:25.149845] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2576:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:16:25.287 passed 00:16:25.287 Test: test_process_fabrics_cmd ...passed 00:16:25.287 Test: test_connect ...[2024-04-24 01:46:25.151563] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 970:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:16:25.287 [2024-04-24 01:46:25.152042] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 833:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:16:25.287 [2024-04-24 01:46:25.152389] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1009:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:16:25.287 [2024-04-24 01:46:25.152657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:16:25.287 [2024-04-24 01:46:25.152999] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 844:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:16:25.287 [2024-04-24 01:46:25.153261] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 851:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:16:25.287 [2024-04-24 01:46:25.153680] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 857:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:16:25.287 [2024-04-24 01:46:25.153986] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 884:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:16:25.287 [2024-04-24 01:46:25.154371] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:16:25.287 [2024-04-24 01:46:25.154777] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 637:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:16:25.287 [2024-04-24 01:46:25.155635] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 643:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:16:25.287 [2024-04-24 01:46:25.156034] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 649:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:16:25.287 [2024-04-24 01:46:25.156760] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 656:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:16:25.287 [2024-04-24 01:46:25.157145] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 679:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:16:25.287 [2024-04-24 01:46:25.157633] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 256:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:16:25.287 [2024-04-24 01:46:25.158176] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:16:25.287 [2024-04-24 01:46:25.158592] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:16:25.287 passed 00:16:25.287 Test: test_get_ns_id_desc_list ...passed 00:16:25.287 Test: test_identify_ns ...[2024-04-24 01:46:25.160015] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:25.287 [2024-04-24 01:46:25.160956] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:16:25.287 [2024-04-24 01:46:25.161475] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:25.287 passed 00:16:25.287 Test: test_identify_ns_iocs_specific ...[2024-04-24 01:46:25.162360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:25.287 [2024-04-24 01:46:25.163254] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:25.287 passed 00:16:25.287 Test: test_reservation_write_exclusive ...passed 00:16:25.287 Test: test_reservation_exclusive_access ...passed 00:16:25.287 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:16:25.287 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:16:25.287 Test: test_reservation_notification_log_page ...passed 00:16:25.287 
Test: test_get_dif_ctx ...passed 00:16:25.287 Test: test_set_get_features ...[2024-04-24 01:46:25.166338] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1606:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:16:25.287 [2024-04-24 01:46:25.166539] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1606:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:16:25.287 [2024-04-24 01:46:25.166710] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1617:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:16:25.287 [2024-04-24 01:46:25.166851] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1693:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:16:25.287 passed 00:16:25.287 Test: test_identify_ctrlr ...passed 00:16:25.287 Test: test_identify_ctrlr_iocs_specific ...passed 00:16:25.287 Test: test_custom_admin_cmd ...passed 00:16:25.287 Test: test_fused_compare_and_write ...[2024-04-24 01:46:25.168138] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4177:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:16:25.287 [2024-04-24 01:46:25.168304] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4166:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:16:25.287 [2024-04-24 01:46:25.168439] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4184:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:16:25.287 passed 00:16:25.287 Test: test_multi_async_event_reqs ...passed 00:16:25.287 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:16:25.287 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:16:25.287 Test: test_multi_async_events ...passed 00:16:25.287 Test: test_rae ...passed 00:16:25.287 Test: test_nvmf_ctrlr_create_destruct ...passed 00:16:25.287 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:16:25.287 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-24 01:46:25.170163] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4304:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:16:25.287 passed 00:16:25.287 Test: test_zcopy_read ...passed 00:16:25.287 Test: test_zcopy_write ...passed 00:16:25.287 Test: test_nvmf_property_set ...passed 00:16:25.287 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-24 01:46:25.171052] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1904:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:16:25.287 [2024-04-24 01:46:25.171205] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1904:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:16:25.287 passed 00:16:25.288 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-04-24 01:46:25.171476] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1927:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:16:25.288 [2024-04-24 01:46:25.171609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1933:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:16:25.288 [2024-04-24 01:46:25.171737] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1945:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:16:25.288 passed 00:16:25.288 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:16:25.288 00:16:25.288 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.288 suites 1 1 n/a 0 0 00:16:25.288 tests 31 31 31 0 0 00:16:25.288 asserts 951 951 951 0 n/a 
00:16:25.288 00:16:25.288 Elapsed time = 0.012 seconds 00:16:25.288 01:46:25 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:16:25.288 00:16:25.288 00:16:25.288 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.288 http://cunit.sourceforge.net/ 00:16:25.288 00:16:25.288 00:16:25.288 Suite: nvmf 00:16:25.288 Test: test_get_rw_params ...passed 00:16:25.288 Test: test_lba_in_range ...passed 00:16:25.288 Test: test_get_dif_ctx ...passed 00:16:25.288 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:16:25.288 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-24 01:46:25.223663] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:16:25.288 [2024-04-24 01:46:25.224108] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:16:25.288 [2024-04-24 01:46:25.224279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:16:25.288 passed 00:16:25.288 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-24 01:46:25.224384] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:16:25.288 passed 00:16:25.288 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-24 01:46:25.224511] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 960:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:16:25.288 [2024-04-24 01:46:25.224683] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:16:25.288 [2024-04-24 01:46:25.224745] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:16:25.288 [2024-04-24 01:46:25.224844] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:16:25.288 [2024-04-24 01:46:25.224910] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:16:25.288 passed 00:16:25.288 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:16:25.288 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:16:25.288 00:16:25.288 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.288 suites 1 1 n/a 0 0 00:16:25.288 tests 9 9 9 0 0 00:16:25.288 asserts 157 157 157 0 n/a 00:16:25.288 00:16:25.288 Elapsed time = 0.002 seconds 00:16:25.288 01:46:25 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:16:25.288 00:16:25.288 00:16:25.288 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.288 http://cunit.sourceforge.net/ 00:16:25.288 00:16:25.288 00:16:25.288 Suite: nvmf 00:16:25.288 Test: test_discovery_log ...passed 00:16:25.288 Test: test_discovery_log_with_filters ...passed 00:16:25.288 00:16:25.288 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.288 suites 1 1 n/a 0 0 00:16:25.288 tests 2 2 2 0 0 00:16:25.288 asserts 238 238 238 0 n/a 00:16:25.288 00:16:25.288 Elapsed time = 0.003 seconds 00:16:25.288 01:46:25 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:16:25.288 00:16:25.288 00:16:25.288 CUnit - A unit testing framework for C - 
Version 2.1-3 00:16:25.288 http://cunit.sourceforge.net/ 00:16:25.288 00:16:25.288 00:16:25.288 Suite: nvmf 00:16:25.288 Test: nvmf_test_create_subsystem ...[2024-04-24 01:46:25.308823] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:16:25.288 [2024-04-24 01:46:25.309141] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:16:25.288 [2024-04-24 01:46:25.309325] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:16:25.288 [2024-04-24 01:46:25.309437] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:16:25.288 [2024-04-24 01:46:25.309492] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:16:25.288 [2024-04-24 01:46:25.309545] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:16:25.288 [2024-04-24 01:46:25.309589] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:16:25.288 [2024-04-24 01:46:25.309653] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:16:25.288 [2024-04-24 01:46:25.309696] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:16:25.288 [2024-04-24 01:46:25.309745] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:16:25.288 [2024-04-24 01:46:25.309778] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:16:25.288 [2024-04-24 01:46:25.309829] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:16:25.288 [2024-04-24 01:46:25.309970] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:16:25.288 [2024-04-24 01:46:25.310089] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:16:25.288 [2024-04-24 01:46:25.310212] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:16:25.288 [2024-04-24 01:46:25.310270] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:16:25.288 [2024-04-24 01:46:25.310387] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:16:25.288 [2024-04-24 01:46:25.310449] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:16:25.288 [2024-04-24 01:46:25.310497] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:16:25.288 [2024-04-24 01:46:25.310579] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:16:25.288 passed 00:16:25.288 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-24 01:46:25.310620] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:16:25.288 [2024-04-24 01:46:25.310660] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:16:25.288 [2024-04-24 01:46:25.310826] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:16:25.288 [2024-04-24 01:46:25.310884] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1881:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:16:25.288 passed 00:16:25.288 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:16:25.289 Test: test_spdk_nvmf_ns_visible ...[2024-04-24 01:46:25.311127] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:16:25.289 passed 00:16:25.289 Test: test_reservation_register ...[2024-04-24 
01:46:25.311579] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 [2024-04-24 01:46:25.311722] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2990:nvmf_ns_reservation_register: *ERROR*: No registrant 00:16:25.289 passed 00:16:25.289 Test: test_reservation_register_with_ptpl ...passed 00:16:25.289 Test: test_reservation_acquire_preempt_1 ...[2024-04-24 01:46:25.312839] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_reservation_acquire_release_with_ptpl ...passed 00:16:25.289 Test: test_reservation_release ...[2024-04-24 01:46:25.314649] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_reservation_unregister_notification ...[2024-04-24 01:46:25.315043] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_reservation_release_notification ...[2024-04-24 01:46:25.315337] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_reservation_release_notification_write_exclusive ...[2024-04-24 01:46:25.315613] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_reservation_clear_notification ...[2024-04-24 01:46:25.315905] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_reservation_preempt_notification ...[2024-04-24 01:46:25.316169] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:16:25.289 passed 00:16:25.289 Test: test_spdk_nvmf_ns_event ...passed 00:16:25.289 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:16:25.289 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:16:25.289 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-24 01:46:25.317022] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:16:25.289 passed 00:16:25.289 Test: test_nvmf_ns_reservation_report ...[2024-04-24 01:46:25.317110] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:16:25.289 [2024-04-24 01:46:25.317263] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3295:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:16:25.289 passed 00:16:25.289 Test: test_nvmf_nqn_is_valid ...[2024-04-24 01:46:25.317341] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:16:25.289 [2024-04-24 01:46:25.317393] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:40fcaa40-d255-4268-97a5-d718d03ad95": uuid is not the 
correct length 00:16:25.289 [2024-04-24 01:46:25.317437] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:16:25.289 passed 00:16:25.289 Test: test_nvmf_ns_reservation_restore ...passed 00:16:25.289 Test: test_nvmf_subsystem_state_change ...[2024-04-24 01:46:25.317591] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2489:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:16:25.289 passed 00:16:25.289 Test: test_nvmf_reservation_custom_ops ...passed 00:16:25.289 00:16:25.289 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.289 suites 1 1 n/a 0 0 00:16:25.289 tests 23 23 23 0 0 00:16:25.289 asserts 482 482 482 0 n/a 00:16:25.289 00:16:25.289 Elapsed time = 0.010 seconds 00:16:25.289 01:46:25 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:16:25.590 00:16:25.590 00:16:25.590 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.590 http://cunit.sourceforge.net/ 00:16:25.590 00:16:25.590 00:16:25.590 Suite: nvmf 00:16:25.590 Test: test_nvmf_tcp_create ...[2024-04-24 01:46:25.401812] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:16:25.590 passed 00:16:25.590 Test: test_nvmf_tcp_destroy ...passed 00:16:25.590 Test: test_nvmf_tcp_poll_group_create ...passed 00:16:25.590 Test: test_nvmf_tcp_send_c2h_data ...passed 00:16:25.590 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:16:25.590 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:16:25.590 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:16:25.590 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-04-24 01:46:25.534573] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.590 passed 00:16:25.590 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:16:25.590 Test: test_nvmf_tcp_icreq_handle ...[2024-04-24 01:46:25.534701] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.590 [2024-04-24 01:46:25.534840] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.590 [2024-04-24 01:46:25.534920] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.590 [2024-04-24 01:46:25.534982] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.590 [2024-04-24 01:46:25.535149] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2110:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:16:25.590 [2024-04-24 01:46:25.535291] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.535386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.535450] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2110:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:16:25.591 
[2024-04-24 01:46:25.535517] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.535579] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.535643] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.535711] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.535815] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.591 passed 00:16:25.591 Test: test_nvmf_tcp_check_xfer_type ...passed 00:16:25.591 Test: test_nvmf_tcp_invalid_sgl ...[2024-04-24 01:46:25.535919] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2505:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:16:25.591 [2024-04-24 01:46:25.535988] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 passed 00:16:25.591 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-24 01:46:25.536044] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037790 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.536145] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2237:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fff6f0384f0 00:16:25.591 [2024-04-24 01:46:25.536278] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.536366] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.536448] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2294:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fff6f037c50 00:16:25.591 [2024-04-24 01:46:25.536521] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.536591] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.536653] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2247:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:16:25.591 [2024-04-24 01:46:25.536722] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.536807] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.536883] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2286:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:16:25.591 [2024-04-24 01:46:25.536946] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537010] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.537069] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537134] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.537235] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537292] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.537388] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537444] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.537515] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537573] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 [2024-04-24 01:46:25.537674] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537731] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 passed 00:16:25.591 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-04-24 01:46:25.537807] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1085:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:16:25.591 [2024-04-24 01:46:25.537879] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1596:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff6f037c50 is same with the state(5) to be set 00:16:25.591 passed 00:16:25.591 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-04-24 01:46:25.574491] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:16:25.591 passed 00:16:25.591 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-24 01:46:25.574652] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:16:25.591 [2024-04-24 01:46:25.575302] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:16:25.591 [2024-04-24 01:46:25.575432] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 
00:16:25.591 passed 00:16:25.591 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-24 01:46:25.575801] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:16:25.591 passed 00:16:25.591 00:16:25.591 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.591 suites 1 1 n/a 0 0 00:16:25.591 tests 17 17 17 0 0 00:16:25.591 asserts 222 222 222 0 n/a 00:16:25.591 00:16:25.591 Elapsed time = 0.203 seconds[2024-04-24 01:46:25.575884] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:16:25.591 00:16:25.886 01:46:25 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:16:25.886 00:16:25.886 00:16:25.886 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.886 http://cunit.sourceforge.net/ 00:16:25.886 00:16:25.886 00:16:25.886 Suite: nvmf 00:16:25.886 Test: test_nvmf_tgt_create_poll_group ...passed 00:16:25.886 00:16:25.886 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.886 suites 1 1 n/a 0 0 00:16:25.886 tests 1 1 1 0 0 00:16:25.886 asserts 17 17 17 0 n/a 00:16:25.886 00:16:25.886 Elapsed time = 0.028 seconds 00:16:25.886 00:16:25.886 real 0m0.665s 00:16:25.886 user 0m0.281s 00:16:25.886 sys 0m0.375s 00:16:25.886 01:46:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.886 01:46:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.886 ************************************ 00:16:25.886 END TEST unittest_nvmf 00:16:25.886 ************************************ 00:16:25.886 01:46:25 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:25.886 01:46:25 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:25.886 01:46:25 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:16:25.886 01:46:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:25.886 01:46:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.886 01:46:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.886 ************************************ 00:16:25.886 START TEST unittest_nvmf_rdma 00:16:25.886 ************************************ 00:16:25.886 01:46:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:16:25.886 00:16:25.886 00:16:25.886 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.886 http://cunit.sourceforge.net/ 00:16:25.886 00:16:25.886 00:16:25.886 Suite: nvmf 00:16:25.886 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-24 01:46:25.927078] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1847:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:16:25.886 [2024-04-24 01:46:25.927911] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1897:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:16:25.886 [2024-04-24 01:46:25.927981] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1897:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:16:25.886 passed 00:16:25.886 Test: test_spdk_nvmf_rdma_request_process ...passed 00:16:25.886 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:16:25.886 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md 
...passed 00:16:25.886 Test: test_nvmf_rdma_opts_init ...passed 00:16:25.886 Test: test_nvmf_rdma_request_free_data ...passed 00:16:25.886 Test: test_nvmf_rdma_resources_create ...passed 00:16:25.886 Test: test_nvmf_rdma_qpair_compare ...passed 00:16:25.886 Test: test_nvmf_rdma_resize_cq ...[2024-04-24 01:46:25.932047] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 935:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:16:25.886 Using CQ of insufficient size may lead to CQ overrun 00:16:25.886 [2024-04-24 01:46:25.932183] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 940:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:16:25.886 passed 00:16:25.886 00:16:25.886 [2024-04-24 01:46:25.932264] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 948:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:16:25.886 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.886 suites 1 1 n/a 0 0 00:16:25.886 tests 9 9 9 0 0 00:16:25.886 asserts 579 579 579 0 n/a 00:16:25.886 00:16:25.886 Elapsed time = 0.005 seconds 00:16:25.886 00:16:25.886 real 0m0.058s 00:16:25.886 user 0m0.028s 00:16:25.886 sys 0m0.031s 00:16:25.886 01:46:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.886 01:46:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.886 ************************************ 00:16:25.886 END TEST unittest_nvmf_rdma 00:16:25.886 ************************************ 00:16:26.146 01:46:26 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:26.146 01:46:26 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:16:26.146 01:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:26.146 01:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.146 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.146 ************************************ 00:16:26.146 START TEST unittest_scsi 00:16:26.146 ************************************ 00:16:26.146 01:46:26 -- common/autotest_common.sh@1111 -- # unittest_scsi 00:16:26.146 01:46:26 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:16:26.146 00:16:26.146 00:16:26.146 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.146 http://cunit.sourceforge.net/ 00:16:26.146 00:16:26.146 00:16:26.146 Suite: dev_suite 00:16:26.146 Test: dev_destruct_null_dev ...passed 00:16:26.146 Test: dev_destruct_zero_luns ...passed 00:16:26.146 Test: dev_destruct_null_lun ...passed 00:16:26.146 Test: dev_destruct_success ...passed 00:16:26.146 Test: dev_construct_num_luns_zero ...[2024-04-24 01:46:26.087710] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:16:26.146 passed 00:16:26.146 Test: dev_construct_no_lun_zero ...[2024-04-24 01:46:26.088057] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:16:26.146 passed 00:16:26.146 Test: dev_construct_null_lun ...passed 00:16:26.146 Test: dev_construct_name_too_long ...passed 00:16:26.146 Test: dev_construct_success ...passed[2024-04-24 01:46:26.088161] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:16:26.146 [2024-04-24 01:46:26.088220] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:16:26.146 00:16:26.146 Test: dev_construct_success_lun_zero_not_first ...passed 00:16:26.146 Test: dev_queue_mgmt_task_success ...passed 00:16:26.146 Test: dev_queue_task_success ...passed 00:16:26.146 Test: dev_stop_success ...passed 00:16:26.146 Test: dev_add_port_max_ports ...[2024-04-24 01:46:26.088551] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:16:26.146 passed 00:16:26.146 Test: dev_add_port_construct_failure1 ...[2024-04-24 01:46:26.088658] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:16:26.146 passed 00:16:26.146 Test: dev_add_port_construct_failure2 ...[2024-04-24 01:46:26.088760] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:16:26.146 passed 00:16:26.146 Test: dev_add_port_success1 ...passed 00:16:26.146 Test: dev_add_port_success2 ...passed 00:16:26.146 Test: dev_add_port_success3 ...passed 00:16:26.146 Test: dev_find_port_by_id_num_ports_zero ...passed 00:16:26.146 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:16:26.146 Test: dev_find_port_by_id_success ...passed 00:16:26.146 Test: dev_add_lun_bdev_not_found ...passed 00:16:26.146 Test: dev_add_lun_no_free_lun_id ...[2024-04-24 01:46:26.089203] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:16:26.146 passed 00:16:26.146 Test: dev_add_lun_success1 ...passed 00:16:26.146 Test: dev_add_lun_success2 ...passed 00:16:26.146 Test: dev_check_pending_tasks ...passed 00:16:26.146 Test: dev_iterate_luns ...passed 00:16:26.146 Test: dev_find_free_lun ...passed 00:16:26.146 00:16:26.146 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.146 suites 1 1 n/a 0 0 00:16:26.146 tests 29 29 29 0 0 00:16:26.146 asserts 97 97 97 0 n/a 00:16:26.146 00:16:26.146 Elapsed time = 0.002 seconds 00:16:26.146 01:46:26 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:16:26.146 00:16:26.146 00:16:26.146 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.146 http://cunit.sourceforge.net/ 00:16:26.147 00:16:26.147 00:16:26.147 Suite: lun_suite 00:16:26.147 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-24 01:46:26.130797] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:16:26.147 passed 00:16:26.147 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-24 01:46:26.131197] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:16:26.147 passed 00:16:26.147 Test: lun_task_mgmt_execute_lun_reset ...passed 00:16:26.147 Test: lun_task_mgmt_execute_target_reset ...passed 00:16:26.147 Test: lun_task_mgmt_execute_invalid_case ...[2024-04-24 01:46:26.131375] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:16:26.147 passed 00:16:26.147 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:16:26.147 Test: lun_append_task_null_lun_alloc_len_lt_4096 
...passed 00:16:26.147 Test: lun_append_task_null_lun_not_supported ...passed 00:16:26.147 Test: lun_execute_scsi_task_pending ...passed 00:16:26.147 Test: lun_execute_scsi_task_complete ...passed 00:16:26.147 Test: lun_execute_scsi_task_resize ...passed 00:16:26.147 Test: lun_destruct_success ...passed 00:16:26.147 Test: lun_construct_null_ctx ...passed 00:16:26.147 Test: lun_construct_success ...[2024-04-24 01:46:26.131597] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:16:26.147 passed 00:16:26.147 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:16:26.147 Test: lun_reset_task_suspend_scsi_task ...passed 00:16:26.147 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:16:26.147 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:16:26.147 00:16:26.147 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.147 suites 1 1 n/a 0 0 00:16:26.147 tests 18 18 18 0 0 00:16:26.147 asserts 153 153 153 0 n/a 00:16:26.147 00:16:26.147 Elapsed time = 0.001 seconds 00:16:26.147 01:46:26 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:16:26.147 00:16:26.147 00:16:26.147 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.147 http://cunit.sourceforge.net/ 00:16:26.147 00:16:26.147 00:16:26.147 Suite: scsi_suite 00:16:26.147 Test: scsi_init ...passed 00:16:26.147 00:16:26.147 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.147 suites 1 1 n/a 0 0 00:16:26.147 tests 1 1 1 0 0 00:16:26.147 asserts 1 1 1 0 n/a 00:16:26.147 00:16:26.147 Elapsed time = 0.000 seconds 00:16:26.147 01:46:26 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:16:26.147 00:16:26.147 00:16:26.147 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.147 http://cunit.sourceforge.net/ 00:16:26.147 00:16:26.147 00:16:26.147 Suite: translation_suite 00:16:26.147 Test: mode_select_6_test ...passed 00:16:26.147 Test: mode_select_6_test2 ...passed 00:16:26.147 Test: mode_sense_6_test ...passed 00:16:26.147 Test: mode_sense_10_test ...passed 00:16:26.147 Test: inquiry_evpd_test ...passed 00:16:26.147 Test: inquiry_standard_test ...passed 00:16:26.147 Test: inquiry_overflow_test ...passed 00:16:26.147 Test: task_complete_test ...passed 00:16:26.147 Test: lba_range_test ...passed 00:16:26.147 Test: xfer_len_test ...[2024-04-24 01:46:26.210071] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:16:26.147 passed 00:16:26.147 Test: xfer_test ...passed 00:16:26.147 Test: scsi_name_padding_test ...passed 00:16:26.147 Test: get_dif_ctx_test ...passed 00:16:26.147 Test: unmap_split_test ...passed 00:16:26.147 00:16:26.147 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.147 suites 1 1 n/a 0 0 00:16:26.147 tests 14 14 14 0 0 00:16:26.147 asserts 1205 1205 1205 0 n/a 00:16:26.147 00:16:26.147 Elapsed time = 0.004 seconds 00:16:26.406 01:46:26 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:16:26.406 00:16:26.406 00:16:26.406 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.406 http://cunit.sourceforge.net/ 00:16:26.406 00:16:26.406 00:16:26.406 Suite: reservation_suite 00:16:26.406 Test: test_reservation_register ...[2024-04-24 01:46:26.245287] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't 
match registrant's key 0xa 00:16:26.406 passed 00:16:26.406 Test: test_reservation_reserve ...[2024-04-24 01:46:26.245692] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:16:26.406 [2024-04-24 01:46:26.245775] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:16:26.406 [2024-04-24 01:46:26.245889] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:16:26.406 passed 00:16:26.406 Test: test_reservation_preempt_non_all_regs ...[2024-04-24 01:46:26.245978] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:16:26.406 [2024-04-24 01:46:26.246094] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:16:26.406 passed 00:16:26.406 Test: test_reservation_preempt_all_regs ...passed 00:16:26.406 Test: test_reservation_cmds_conflict ...[2024-04-24 01:46:26.246254] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:16:26.406 [2024-04-24 01:46:26.246397] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:16:26.406 [2024-04-24 01:46:26.246503] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:16:26.406 [2024-04-24 01:46:26.246574] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:16:26.406 [2024-04-24 01:46:26.246615] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:16:26.406 passed 00:16:26.406 Test: test_scsi2_reserve_release ...passed 00:16:26.406 Test: test_pr_with_scsi2_reserve_release ...passed 00:16:26.406 00:16:26.406 [2024-04-24 01:46:26.246669] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:16:26.406 [2024-04-24 01:46:26.246708] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:16:26.406 [2024-04-24 01:46:26.246815] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:16:26.406 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.406 suites 1 1 n/a 0 0 00:16:26.406 tests 7 7 7 0 0 00:16:26.406 asserts 257 257 257 0 n/a 00:16:26.406 00:16:26.406 Elapsed time = 0.002 seconds 00:16:26.406 00:16:26.406 real 0m0.200s 00:16:26.406 user 0m0.089s 00:16:26.407 sys 0m0.114s 00:16:26.407 01:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:26.407 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 ************************************ 00:16:26.407 END TEST unittest_scsi 00:16:26.407 ************************************ 00:16:26.407 01:46:26 -- unit/unittest.sh@276 -- # uname -s 00:16:26.407 01:46:26 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:16:26.407 01:46:26 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:16:26.407 01:46:26 -- common/autotest_common.sh@1087 -- 
# '[' 2 -le 1 ']' 00:16:26.407 01:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.407 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 ************************************ 00:16:26.407 START TEST unittest_sock 00:16:26.407 ************************************ 00:16:26.407 01:46:26 -- common/autotest_common.sh@1111 -- # unittest_sock 00:16:26.407 01:46:26 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:16:26.407 00:16:26.407 00:16:26.407 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.407 http://cunit.sourceforge.net/ 00:16:26.407 00:16:26.407 00:16:26.407 Suite: sock 00:16:26.407 Test: posix_sock ...passed 00:16:26.407 Test: ut_sock ...passed 00:16:26.407 Test: posix_sock_group ...passed 00:16:26.407 Test: ut_sock_group ...passed 00:16:26.407 Test: posix_sock_group_fairness ...passed 00:16:26.407 Test: _posix_sock_close ...passed 00:16:26.407 Test: sock_get_default_opts ...passed 00:16:26.407 Test: ut_sock_impl_get_set_opts ...passed 00:16:26.407 Test: posix_sock_impl_get_set_opts ...passed 00:16:26.407 Test: ut_sock_map ...passed 00:16:26.407 Test: override_impl_opts ...passed 00:16:26.407 Test: ut_sock_group_get_ctx ...passed 00:16:26.407 00:16:26.407 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.407 suites 1 1 n/a 0 0 00:16:26.407 tests 12 12 12 0 0 00:16:26.407 asserts 349 349 349 0 n/a 00:16:26.407 00:16:26.407 Elapsed time = 0.007 seconds 00:16:26.407 01:46:26 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:16:26.407 00:16:26.407 00:16:26.407 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.407 http://cunit.sourceforge.net/ 00:16:26.407 00:16:26.407 00:16:26.407 Suite: posix 00:16:26.407 Test: flush ...passed 00:16:26.407 00:16:26.407 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.407 suites 1 1 n/a 0 0 00:16:26.407 tests 1 1 1 0 0 00:16:26.407 asserts 28 28 28 0 n/a 00:16:26.407 00:16:26.407 Elapsed time = 0.000 seconds 00:16:26.666 01:46:26 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:26.666 00:16:26.666 real 0m0.125s 00:16:26.666 user 0m0.037s 00:16:26.666 sys 0m0.066s 00:16:26.666 01:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:26.666 ************************************ 00:16:26.666 END TEST unittest_sock 00:16:26.666 ************************************ 00:16:26.666 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.666 01:46:26 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:16:26.666 01:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:26.666 01:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.666 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.666 ************************************ 00:16:26.666 START TEST unittest_thread 00:16:26.666 ************************************ 00:16:26.666 01:46:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:16:26.666 00:16:26.666 00:16:26.666 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.666 http://cunit.sourceforge.net/ 00:16:26.666 00:16:26.666 00:16:26.666 Suite: io_channel 00:16:26.666 Test: thread_alloc ...passed 00:16:26.666 Test: thread_send_msg ...passed 00:16:26.666 Test: thread_poller ...passed 00:16:26.666 Test: poller_pause 
...passed 00:16:26.666 Test: thread_for_each ...passed 00:16:26.666 Test: for_each_channel_remove ...passed 00:16:26.666 Test: for_each_channel_unreg ...[2024-04-24 01:46:26.654004] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffeb4fa5970 already registered (old:0x613000000200 new:0x6130000003c0) 00:16:26.666 passed 00:16:26.666 Test: thread_name ...passed 00:16:26.666 Test: channel ...[2024-04-24 01:46:26.658275] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x559124caed20 00:16:26.666 passed 00:16:26.666 Test: channel_destroy_races ...passed 00:16:26.666 Test: thread_exit_test ...[2024-04-24 01:46:26.663556] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:16:26.666 passed 00:16:26.666 Test: thread_update_stats_test ...passed 00:16:26.666 Test: nested_channel ...passed 00:16:26.666 Test: device_unregister_and_thread_exit_race ...passed 00:16:26.666 Test: cache_closest_timed_poller ...passed 00:16:26.666 Test: multi_timed_pollers_have_same_expiration ...passed 00:16:26.666 Test: io_device_lookup ...passed 00:16:26.666 Test: spdk_spin ...[2024-04-24 01:46:26.674736] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:16:26.666 [2024-04-24 01:46:26.674796] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffeb4fa5960 00:16:26.666 [2024-04-24 01:46:26.674915] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:16:26.666 [2024-04-24 01:46:26.676674] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:16:26.666 [2024-04-24 01:46:26.676758] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffeb4fa5960 00:16:26.666 [2024-04-24 01:46:26.676793] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:16:26.666 [2024-04-24 01:46:26.676835] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffeb4fa5960 00:16:26.666 [2024-04-24 01:46:26.676885] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:16:26.666 [2024-04-24 01:46:26.676936] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffeb4fa5960 00:16:26.666 [2024-04-24 01:46:26.676971] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:16:26.666 [2024-04-24 01:46:26.677027] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffeb4fa5960 00:16:26.666 passed 00:16:26.666 Test: for_each_channel_and_thread_exit_race ...passed 00:16:26.666 Test: for_each_thread_and_thread_exit_race ...passed 00:16:26.666 00:16:26.666 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.666 suites 1 1 n/a 0 0 00:16:26.666 tests 20 20 20 0 0 00:16:26.666 asserts 409 
409 409 0 n/a 00:16:26.666 00:16:26.666 Elapsed time = 0.052 seconds 00:16:26.666 00:16:26.666 real 0m0.104s 00:16:26.666 user 0m0.079s 00:16:26.666 sys 0m0.026s 00:16:26.666 01:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:26.666 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.666 ************************************ 00:16:26.666 END TEST unittest_thread 00:16:26.666 ************************************ 00:16:26.926 01:46:26 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:16:26.926 01:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:26.926 01:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.926 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.926 ************************************ 00:16:26.926 START TEST unittest_iobuf 00:16:26.926 ************************************ 00:16:26.926 01:46:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:16:26.926 00:16:26.926 00:16:26.926 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.926 http://cunit.sourceforge.net/ 00:16:26.926 00:16:26.926 00:16:26.926 Suite: io_channel 00:16:26.926 Test: iobuf ...passed 00:16:26.926 Test: iobuf_cache ...[2024-04-24 01:46:26.840556] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:16:26.926 [2024-04-24 01:46:26.840917] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:16:26.926 [2024-04-24 01:46:26.841068] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 323:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:16:26.926 [2024-04-24 01:46:26.841129] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 326:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:16:26.926 [2024-04-24 01:46:26.841222] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:16:26.926 [2024-04-24 01:46:26.841267] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:16:26.926 passed 00:16:26.926 00:16:26.926 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.926 suites 1 1 n/a 0 0 00:16:26.926 tests 2 2 2 0 0 00:16:26.926 asserts 107 107 107 0 n/a 00:16:26.926 00:16:26.926 Elapsed time = 0.006 seconds 00:16:26.926 00:16:26.926 real 0m0.046s 00:16:26.926 user 0m0.033s 00:16:26.926 sys 0m0.013s 00:16:26.926 01:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:26.926 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.926 ************************************ 00:16:26.926 END TEST unittest_iobuf 00:16:26.926 ************************************ 00:16:26.926 01:46:26 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:16:26.926 01:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:26.926 01:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.926 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.926 ************************************ 00:16:26.926 START TEST unittest_util 00:16:26.926 ************************************ 00:16:26.926 01:46:26 -- common/autotest_common.sh@1111 -- # unittest_util 00:16:26.926 01:46:26 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:16:26.926 00:16:26.926 00:16:26.926 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.926 http://cunit.sourceforge.net/ 00:16:26.926 00:16:26.926 00:16:26.926 Suite: base64 00:16:26.926 Test: test_base64_get_encoded_strlen ...passed 00:16:26.926 Test: test_base64_get_decoded_len ...passed 00:16:26.926 Test: test_base64_encode ...passed 00:16:26.926 Test: test_base64_decode ...passed 00:16:26.926 Test: test_base64_urlsafe_encode ...passed 00:16:26.926 Test: test_base64_urlsafe_decode ...passed 00:16:26.926 00:16:26.926 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.926 suites 1 1 n/a 0 0 00:16:26.926 tests 6 6 6 0 0 00:16:26.926 asserts 112 112 112 0 n/a 00:16:26.926 00:16:26.926 Elapsed time = 0.000 seconds 00:16:26.926 01:46:27 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:16:27.185 00:16:27.185 00:16:27.185 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.185 http://cunit.sourceforge.net/ 00:16:27.185 00:16:27.185 00:16:27.185 Suite: bit_array 00:16:27.185 Test: test_1bit ...passed 00:16:27.185 Test: test_64bit ...passed 00:16:27.185 Test: test_find ...passed 00:16:27.186 Test: test_resize ...passed 00:16:27.186 Test: test_errors ...passed 00:16:27.186 Test: test_count ...passed 00:16:27.186 Test: test_mask_store_load ...passed 00:16:27.186 Test: test_mask_clear ...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 8 8 8 0 0 00:16:27.186 asserts 5075 5075 5075 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.002 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: cpuset 00:16:27.186 Test: test_cpuset ...passed 00:16:27.186 Test: test_cpuset_parse ...[2024-04-24 01:46:27.062346] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:16:27.186 [2024-04-24 01:46:27.062709] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:16:27.186 [2024-04-24 01:46:27.062806] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:16:27.186 [2024-04-24 01:46:27.062906] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:16:27.186 [2024-04-24 01:46:27.062949] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:16:27.186 [2024-04-24 01:46:27.063000] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:16:27.186 [2024-04-24 01:46:27.063044] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:16:27.186 [2024-04-24 01:46:27.063107] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:16:27.186 passed 00:16:27.186 Test: test_cpuset_fmt ...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 3 3 3 0 0 00:16:27.186 asserts 65 65 65 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.002 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: crc16 00:16:27.186 Test: test_crc16_t10dif ...passed 00:16:27.186 Test: test_crc16_t10dif_seed ...passed 00:16:27.186 Test: test_crc16_t10dif_copy ...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 3 3 3 0 0 00:16:27.186 asserts 5 5 5 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.000 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: crc32_ieee 00:16:27.186 Test: test_crc32_ieee ...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 1 1 1 0 0 00:16:27.186 asserts 1 1 1 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.000 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: crc32c 00:16:27.186 Test: test_crc32c ...passed 00:16:27.186 Test: test_crc32c_nvme ...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 2 2 2 0 0 00:16:27.186 asserts 16 16 16 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.000 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: crc64 00:16:27.186 Test: test_crc64_nvme 
...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 1 1 1 0 0 00:16:27.186 asserts 4 4 4 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.000 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: string 00:16:27.186 Test: test_parse_ip_addr ...passed 00:16:27.186 Test: test_str_chomp ...passed 00:16:27.186 Test: test_parse_capacity ...passed 00:16:27.186 Test: test_sprintf_append_realloc ...passed 00:16:27.186 Test: test_strtol ...passed 00:16:27.186 Test: test_strtoll ...passed 00:16:27.186 Test: test_strarray ...passed 00:16:27.186 Test: test_strcpy_replace ...passed 00:16:27.186 00:16:27.186 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.186 suites 1 1 n/a 0 0 00:16:27.186 tests 8 8 8 0 0 00:16:27.186 asserts 161 161 161 0 n/a 00:16:27.186 00:16:27.186 Elapsed time = 0.001 seconds 00:16:27.186 01:46:27 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:16:27.186 00:16:27.186 00:16:27.186 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.186 http://cunit.sourceforge.net/ 00:16:27.186 00:16:27.186 00:16:27.186 Suite: dif 00:16:27.447 Test: dif_generate_and_verify_test ...[2024-04-24 01:46:27.270425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:16:27.447 [2024-04-24 01:46:27.270997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:16:27.447 [2024-04-24 01:46:27.271301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:16:27.447 [2024-04-24 01:46:27.271607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:16:27.447 [2024-04-24 01:46:27.271958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:16:27.447 [2024-04-24 01:46:27.272284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:16:27.447 passed 00:16:27.447 Test: dif_disable_check_test ...[2024-04-24 01:46:27.273344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:16:27.447 [2024-04-24 01:46:27.273674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:16:27.447 [2024-04-24 01:46:27.273972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:16:27.447 passed 00:16:27.447 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-24 01:46:27.275071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:16:27.447 [2024-04-24 01:46:27.275402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:16:27.447 
[2024-04-24 01:46:27.275735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:16:27.447 [2024-04-24 01:46:27.276291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:16:27.447 [2024-04-24 01:46:27.276663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:16:27.447 [2024-04-24 01:46:27.277077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:16:27.447 [2024-04-24 01:46:27.277512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:16:27.447 [2024-04-24 01:46:27.277947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:16:27.447 [2024-04-24 01:46:27.278391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:16:27.448 [2024-04-24 01:46:27.278865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:16:27.448 [2024-04-24 01:46:27.279314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:16:27.448 passed 00:16:27.448 Test: dif_apptag_mask_test ...[2024-04-24 01:46:27.279759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:16:27.448 [2024-04-24 01:46:27.280193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:16:27.448 passed 00:16:27.448 Test: dif_sec_512_md_0_error_test ...[2024-04-24 01:46:27.280500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:16:27.448 passed 00:16:27.448 Test: dif_sec_4096_md_0_error_test ...[2024-04-24 01:46:27.280583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:16:27.448 passed 00:16:27.448 Test: dif_sec_4100_md_128_error_test ...[2024-04-24 01:46:27.280662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:16:27.448 [2024-04-24 01:46:27.280752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:16:27.448 passed 00:16:27.448 Test: dif_guard_seed_test ...[2024-04-24 01:46:27.280824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:16:27.448 passed 00:16:27.448 Test: dif_guard_value_test ...passed 00:16:27.448 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:16:27.448 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:16:27.448 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-24 01:46:27.330317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fdcc, Actual=fd4c 00:16:27.448 [2024-04-24 01:46:27.333068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fea1, Actual=fe21 00:16:27.448 [2024-04-24 01:46:27.335797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.338588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.341403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.448 [2024-04-24 01:46:27.344149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.448 [2024-04-24 01:46:27.346893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=c28f 00:16:27.448 [2024-04-24 01:46:27.349352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fe21, Actual=a2c3 00:16:27.448 [2024-04-24 01:46:27.351827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, 
Expected=1a3753ed, Actual=1ab753ed 00:16:27.448 [2024-04-24 01:46:27.354653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=38d74660, Actual=38574660 00:16:27.448 [2024-04-24 01:46:27.357417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.360158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.362884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=80000000000060 00:16:27.448 [2024-04-24 01:46:27.365642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=80000000000060 00:16:27.448 [2024-04-24 01:46:27.368449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=72375c0b 00:16:27.448 [2024-04-24 01:46:27.370910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=38574660, Actual=f77df3c7 00:16:27.448 [2024-04-24 01:46:27.373450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.448 [2024-04-24 01:46:27.376229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:16:27.448 [2024-04-24 01:46:27.378975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.381745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.384349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.448 [2024-04-24 01:46:27.386864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.448 [2024-04-24 01:46:27.389403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.448 [2024-04-24 01:46:27.391659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88010a2d4837a266, Actual=76f3679d60029b68 00:16:27.448 passed 00:16:27.448 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-24 01:46:27.393032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.448 [2024-04-24 01:46:27.393348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:16:27.448 [2024-04-24 01:46:27.393659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.393982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=8 00:16:27.448 [2024-04-24 01:46:27.394326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.448 [2024-04-24 01:46:27.394655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.448 [2024-04-24 01:46:27.394970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.448 [2024-04-24 01:46:27.395163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a2c3 00:16:27.448 [2024-04-24 01:46:27.395369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.449 [2024-04-24 01:46:27.395685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:16:27.449 [2024-04-24 01:46:27.396022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.396355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.396676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.449 [2024-04-24 01:46:27.396969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.449 [2024-04-24 01:46:27.397282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.449 [2024-04-24 01:46:27.397471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f77df3c7 00:16:27.449 [2024-04-24 01:46:27.397693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.449 [2024-04-24 01:46:27.398005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:16:27.449 [2024-04-24 01:46:27.398330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.398651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.398980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.399294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.399629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.449 [2024-04-24 01:46:27.399841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=76f3679d60029b68 00:16:27.449 passed 00:16:27.449 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-24 01:46:27.400098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.449 [2024-04-24 01:46:27.400435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:16:27.449 [2024-04-24 01:46:27.400743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.401062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.401400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.401720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.402028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.449 [2024-04-24 01:46:27.402238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a2c3 00:16:27.449 [2024-04-24 01:46:27.402434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.449 [2024-04-24 01:46:27.402765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:16:27.449 [2024-04-24 01:46:27.403083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.403399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.403717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.449 [2024-04-24 01:46:27.404035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.449 [2024-04-24 01:46:27.404370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.449 [2024-04-24 01:46:27.404577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f77df3c7 00:16:27.449 [2024-04-24 01:46:27.404803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.449 [2024-04-24 01:46:27.405116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:16:27.449 [2024-04-24 01:46:27.405447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.405770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.406091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.406407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.406753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.449 [2024-04-24 01:46:27.406957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=76f3679d60029b68 00:16:27.449 passed 00:16:27.449 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-24 01:46:27.407209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.449 [2024-04-24 01:46:27.407541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:16:27.449 [2024-04-24 01:46:27.407860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.408179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.408524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.408820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.409144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.449 [2024-04-24 01:46:27.409350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a2c3 00:16:27.449 [2024-04-24 01:46:27.409559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.449 [2024-04-24 01:46:27.409871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:16:27.449 [2024-04-24 01:46:27.410207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.410538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.410855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.449 [2024-04-24 01:46:27.411153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.449 
[2024-04-24 01:46:27.411462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.449 [2024-04-24 01:46:27.411664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f77df3c7 00:16:27.449 [2024-04-24 01:46:27.411859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.449 [2024-04-24 01:46:27.412184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:16:27.449 [2024-04-24 01:46:27.412479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.412796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.449 [2024-04-24 01:46:27.413106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.413423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.449 [2024-04-24 01:46:27.413745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.450 [2024-04-24 01:46:27.413939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=76f3679d60029b68 00:16:27.450 passed 00:16:27.450 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-24 01:46:27.414182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.450 [2024-04-24 01:46:27.414483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:16:27.450 [2024-04-24 01:46:27.414805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.415122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.415455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.415766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.416082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.450 [2024-04-24 01:46:27.416286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a2c3 00:16:27.450 passed 00:16:27.450 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-24 01:46:27.416539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.450 [2024-04-24 01:46:27.416849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:16:27.450 [2024-04-24 01:46:27.417177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.417480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.417796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.450 [2024-04-24 01:46:27.418103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.450 [2024-04-24 01:46:27.418401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.450 [2024-04-24 01:46:27.418609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f77df3c7 00:16:27.450 [2024-04-24 01:46:27.418857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.450 [2024-04-24 01:46:27.419174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:16:27.450 [2024-04-24 01:46:27.419487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.419801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.420128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.420441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.420766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.450 [2024-04-24 01:46:27.420968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=76f3679d60029b68 00:16:27.450 passed 00:16:27.450 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-24 01:46:27.421185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.450 [2024-04-24 01:46:27.421496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:16:27.450 [2024-04-24 01:46:27.421799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.422112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare 
App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.422434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.422749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.423053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.450 [2024-04-24 01:46:27.423257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a2c3 00:16:27.450 passed 00:16:27.450 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-24 01:46:27.423507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.450 [2024-04-24 01:46:27.423814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:16:27.450 [2024-04-24 01:46:27.424156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.424474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.424791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.450 [2024-04-24 01:46:27.425092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.450 [2024-04-24 01:46:27.425399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.450 [2024-04-24 01:46:27.425587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f77df3c7 00:16:27.450 [2024-04-24 01:46:27.425835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.450 [2024-04-24 01:46:27.426148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:16:27.450 [2024-04-24 01:46:27.426464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.426770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.427086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.427387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.450 [2024-04-24 01:46:27.427707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, 
Actual=81315fccc9976dde 00:16:27.450 [2024-04-24 01:46:27.427906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=76f3679d60029b68 00:16:27.450 passed 00:16:27.450 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:16:27.450 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:16:27.450 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:16:27.450 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:16:27.450 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:16:27.450 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:16:27.450 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:16:27.450 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:16:27.450 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:16:27.450 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-24 01:46:27.472536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fdcc, Actual=fd4c 00:16:27.450 [2024-04-24 01:46:27.473662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=c6e, Actual=cee 00:16:27.450 [2024-04-24 01:46:27.474786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.475891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.450 [2024-04-24 01:46:27.477030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.450 [2024-04-24 01:46:27.478139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.450 [2024-04-24 01:46:27.479254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=c28f 00:16:27.450 [2024-04-24 01:46:27.480370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=b823 00:16:27.451 [2024-04-24 01:46:27.481482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1a3753ed, Actual=1ab753ed 00:16:27.451 [2024-04-24 01:46:27.482596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=ddc273de, Actual=dd4273de 00:16:27.451 [2024-04-24 01:46:27.483713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.484860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.485973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=80000000000060 00:16:27.451 [2024-04-24 01:46:27.487101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=80000000000060 00:16:27.451 [2024-04-24 01:46:27.488215] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=72375c0b 00:16:27.451 [2024-04-24 01:46:27.489334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=741688fe, Actual=bb3c3d59 00:16:27.451 [2024-04-24 01:46:27.490436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.451 [2024-04-24 01:46:27.491596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=26856ef3c41b464b, Actual=26056ef3c41b464b 00:16:27.451 [2024-04-24 01:46:27.492718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.493838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.494958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.451 [2024-04-24 01:46:27.496075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.451 [2024-04-24 01:46:27.497195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.451 [2024-04-24 01:46:27.498333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=b275a546a53bec51 00:16:27.451 passed 00:16:27.451 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-24 01:46:27.498727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.451 [2024-04-24 01:46:27.499005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c5b9, Actual=c539 00:16:27.451 [2024-04-24 01:46:27.499280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.499538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.499841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.451 [2024-04-24 01:46:27.500146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.451 [2024-04-24 01:46:27.500413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.451 [2024-04-24 01:46:27.500669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=71f4 00:16:27.451 [2024-04-24 01:46:27.500936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.451 [2024-04-24 01:46:27.501217] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=4372bb8c, Actual=43f2bb8c 00:16:27.451 [2024-04-24 01:46:27.501496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.501767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.502040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.451 [2024-04-24 01:46:27.502309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.451 [2024-04-24 01:46:27.502573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.451 [2024-04-24 01:46:27.502852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=258cf50b 00:16:27.451 [2024-04-24 01:46:27.503145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.451 [2024-04-24 01:46:27.503418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=52648c1e1fcfa3b6, Actual=52e48c1e1fcfa3b6 00:16:27.451 [2024-04-24 01:46:27.503707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.503969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.451 [2024-04-24 01:46:27.504257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.451 [2024-04-24 01:46:27.504521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.451 [2024-04-24 01:46:27.504812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.451 [2024-04-24 01:46:27.505074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c69447ab7eef09ac 00:16:27.451 passed 00:16:27.451 Test: dix_sec_512_md_0_error ...[2024-04-24 01:46:27.505149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
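The *ERROR* lines above are expected output: the dif_* and dix_* inject tests deliberately corrupt one of the three protection-information fields (Guard, App Tag, Ref Tag) and then confirm that the verify path reports the mismatch, which is why every suite still ends in "passed". The varying Expected widths (16-, 32- and 64-bit guards) show that several guard sizes are exercised. The dix_sec_512_md_0_error case checks the opposite path: spdk_dif_ctx_init must reject a format whose metadata region is too small to hold the DIF tuple at all. Below is a minimal sketch of this kind of tuple check, assuming the classic layout of a data block followed by a guard/app-tag/ref-tag tuple; the names and the checksum are illustrative placeholders, not the lib/util/dif.c implementation.

    /* Hedged illustration of a protection-information tuple check.
     * Not SPDK code: struct, function names and the checksum are made up. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pi_tuple {
        uint16_t guard;   /* checksum of the data block */
        uint16_t app_tag; /* application-defined tag */
        uint32_t ref_tag; /* typically derived from the starting LBA */
    };

    /* Placeholder checksum standing in for the real T10-DIF guard CRC. */
    static uint16_t guard_sum(const uint8_t *buf, size_t len)
    {
        uint16_t sum = 0;
        for (size_t i = 0; i < len; i++) {
            sum = (uint16_t)(sum * 31u + buf[i]);
        }
        return sum;
    }

    /* Returns true when all three fields match; otherwise logs a message in
     * the same spirit as the errors above and returns false. */
    static bool pi_verify(const uint8_t *block, size_t block_len,
                          const struct pi_tuple *pi,
                          uint16_t exp_app_tag, uint32_t exp_ref_tag,
                          uint32_t lba)
    {
        uint16_t guard = guard_sum(block, block_len);

        if (guard != pi->guard) {
            fprintf(stderr, "Guard mismatch: LBA=%u, Expected=%x, Actual=%x\n",
                    (unsigned)lba, (unsigned)guard, (unsigned)pi->guard);
            return false;
        }
        if (pi->app_tag != exp_app_tag) {
            fprintf(stderr, "App Tag mismatch: LBA=%u\n", (unsigned)lba);
            return false;
        }
        if (pi->ref_tag != exp_ref_tag) {
            fprintf(stderr, "Ref Tag mismatch: LBA=%u\n", (unsigned)lba);
            return false;
        }
        return true;
    }

    int main(void)
    {
        uint8_t block[512] = { 0 };
        struct pi_tuple pi = {
            .guard = guard_sum(block, sizeof(block)),
            .app_tag = 0x88,
            .ref_tag = 0x58,
        };

        /* Flip one guard bit, as the inject tests do, and confirm the
         * verifier reports the mismatch (exit 0 when it is detected). */
        pi.guard ^= 0x80;
        return pi_verify(block, sizeof(block), &pi, 0x88, 0x58, 88) ? 1 : 0;
    }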
00:16:27.451 passed 00:16:27.451 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:16:27.451 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:16:27.451 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:16:27.451 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:16:27.713 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:16:27.713 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:16:27.713 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:16:27.713 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:16:27.713 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:16:27.713 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-24 01:46:27.549222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fdcc, Actual=fd4c 00:16:27.713 [2024-04-24 01:46:27.550355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=c6e, Actual=cee 00:16:27.713 [2024-04-24 01:46:27.551483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.552601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.553726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.713 [2024-04-24 01:46:27.554850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.713 [2024-04-24 01:46:27.555949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=c28f 00:16:27.713 [2024-04-24 01:46:27.557090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=b823 00:16:27.713 [2024-04-24 01:46:27.558197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1a3753ed, Actual=1ab753ed 00:16:27.713 [2024-04-24 01:46:27.559318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=ddc273de, Actual=dd4273de 00:16:27.713 [2024-04-24 01:46:27.560466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.561584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.562718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=80000000000060 00:16:27.713 [2024-04-24 01:46:27.563827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=80000000000060 00:16:27.713 [2024-04-24 01:46:27.564945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=72375c0b 00:16:27.713 [2024-04-24 01:46:27.566054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=96, Expected=741688fe, Actual=bb3c3d59 00:16:27.713 [2024-04-24 01:46:27.567192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.713 [2024-04-24 01:46:27.568295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=26856ef3c41b464b, Actual=26056ef3c41b464b 00:16:27.713 [2024-04-24 01:46:27.569408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.570517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.571628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.713 [2024-04-24 01:46:27.572751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800060 00:16:27.713 [2024-04-24 01:46:27.573882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.713 [2024-04-24 01:46:27.574990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=b275a546a53bec51 00:16:27.713 passed 00:16:27.713 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-24 01:46:27.575370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:16:27.713 [2024-04-24 01:46:27.575638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c5b9, Actual=c539 00:16:27.713 [2024-04-24 01:46:27.575918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.576209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.713 [2024-04-24 01:46:27.576506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.713 [2024-04-24 01:46:27.576780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.713 [2024-04-24 01:46:27.577052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c28f 00:16:27.713 [2024-04-24 01:46:27.577306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=71f4 00:16:27.713 [2024-04-24 01:46:27.577582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:16:27.713 [2024-04-24 01:46:27.577873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=4372bb8c, Actual=43f2bb8c 00:16:27.713 [2024-04-24 01:46:27.578156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 
00:16:27.713 [2024-04-24 01:46:27.578434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.714 [2024-04-24 01:46:27.578708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.714 [2024-04-24 01:46:27.578974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:16:27.714 [2024-04-24 01:46:27.579237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=72375c0b 00:16:27.714 [2024-04-24 01:46:27.579511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=258cf50b 00:16:27.714 [2024-04-24 01:46:27.579788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:16:27.714 [2024-04-24 01:46:27.580059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=52648c1e1fcfa3b6, Actual=52e48c1e1fcfa3b6 00:16:27.714 [2024-04-24 01:46:27.580335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.714 [2024-04-24 01:46:27.580617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:16:27.714 [2024-04-24 01:46:27.580888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.714 [2024-04-24 01:46:27.581163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:16:27.714 [2024-04-24 01:46:27.581446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=81315fccc9976dde 00:16:27.714 [2024-04-24 01:46:27.581727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c69447ab7eef09ac 00:16:27.714 passed 00:16:27.714 Test: set_md_interleave_iovs_test ...passed 00:16:27.714 Test: set_md_interleave_iovs_split_test ...passed 00:16:27.714 Test: dif_generate_stream_pi_16_test ...passed 00:16:27.714 Test: dif_generate_stream_test ...passed 00:16:27.714 Test: set_md_interleave_iovs_alignment_test ...passed 00:16:27.714 Test: dif_generate_split_test ...[2024-04-24 01:46:27.589464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
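Broadly, the dif_* suites above exercise the interleaved layout (protection information stored together with each block in one buffer) while the dix_* suites exercise the separate-metadata layout. The "Buffer overflow will occur" error logged by spdk_dif_set_md_interleave_iovs, inside a test that still passes, is another negative check: the call is expected to refuse a mapping when the destination cannot hold the blocks together with their interleaved metadata. Roughly, each logical block then occupies the data block size plus the metadata size, so the required capacity is easy to state; the sketch below illustrates that sizing rule under that assumption, with invented names rather than SPDK API.

    /* Hedged sketch of the capacity rule for an interleaved-metadata layout.
     * fits_interleaved() is illustrative, not an SPDK function. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool fits_interleaved(uint64_t num_blocks, uint32_t data_block_size,
                                 uint32_t md_size, uint64_t buf_len)
    {
        /* Extended block size: data plus its interleaved metadata. */
        uint64_t extended_block_size = (uint64_t)data_block_size + md_size;
        uint64_t required = num_blocks * extended_block_size;

        if (buf_len < required) {
            fprintf(stderr, "Buffer overflow would occur: need %llu bytes, have %llu\n",
                    (unsigned long long)required, (unsigned long long)buf_len);
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* 8 blocks of 4096-byte data with 128-byte metadata each, but only
         * room for the bare data: the check must fail (exit 0 here). */
        return fits_interleaved(8, 4096, 128, 8 * 4096) ? 1 : 0;
    }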
00:16:27.714 passed 00:16:27.714 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:16:27.714 Test: dif_verify_split_test ...passed 00:16:27.714 Test: dif_verify_stream_multi_segments_test ...passed 00:16:27.714 Test: update_crc32c_pi_16_test ...passed 00:16:27.714 Test: update_crc32c_test ...passed 00:16:27.714 Test: dif_update_crc32c_split_test ...passed 00:16:27.714 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:16:27.714 Test: get_range_with_md_test ...passed 00:16:27.714 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:16:27.714 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:16:27.714 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:16:27.714 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:16:27.714 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:16:27.714 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:16:27.714 Test: dif_generate_and_verify_unmap_test ...passed 00:16:27.714 00:16:27.714 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.714 suites 1 1 n/a 0 0 00:16:27.714 tests 79 79 79 0 0 00:16:27.714 asserts 3584 3584 3584 0 n/a 00:16:27.714 00:16:27.714 Elapsed time = 0.366 seconds 00:16:27.714 01:46:27 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:16:27.714 00:16:27.714 00:16:27.714 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.714 http://cunit.sourceforge.net/ 00:16:27.714 00:16:27.714 00:16:27.714 Suite: iov 00:16:27.714 Test: test_single_iov ...passed 00:16:27.714 Test: test_simple_iov ...passed 00:16:27.714 Test: test_complex_iov ...passed 00:16:27.714 Test: test_iovs_to_buf ...passed 00:16:27.714 Test: test_buf_to_iovs ...passed 00:16:27.714 Test: test_memset ...passed 00:16:27.714 Test: test_iov_one ...passed 00:16:27.714 Test: test_iov_xfer ...passed 00:16:27.714 00:16:27.714 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.714 suites 1 1 n/a 0 0 00:16:27.714 tests 8 8 8 0 0 00:16:27.714 asserts 156 156 156 0 n/a 00:16:27.714 00:16:27.714 Elapsed time = 0.000 seconds 00:16:27.714 01:46:27 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:16:27.714 00:16:27.714 00:16:27.714 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.714 http://cunit.sourceforge.net/ 00:16:27.714 00:16:27.714 00:16:27.714 Suite: math 00:16:27.714 Test: test_serial_number_arithmetic ...passed 00:16:27.714 Suite: erase 00:16:27.714 Test: test_memset_s ...passed 00:16:27.714 00:16:27.714 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.714 suites 2 2 n/a 0 0 00:16:27.714 tests 2 2 2 0 0 00:16:27.714 asserts 18 18 18 0 n/a 00:16:27.714 00:16:27.714 Elapsed time = 0.000 seconds 00:16:27.714 01:46:27 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:16:27.714 00:16:27.714 00:16:27.714 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.714 http://cunit.sourceforge.net/ 00:16:27.714 00:16:27.714 00:16:27.714 Suite: pipe 00:16:27.714 Test: test_create_destroy ...passed 00:16:27.714 Test: test_write_get_buffer ...passed 00:16:27.714 Test: test_write_advance ...passed 00:16:27.714 Test: test_read_get_buffer ...passed 00:16:27.714 Test: test_read_advance ...passed 00:16:27.714 Test: test_data ...passed 00:16:27.714 00:16:27.714 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.714 suites 1 1 n/a 0 
0 00:16:27.714 tests 6 6 6 0 0 00:16:27.714 asserts 251 251 251 0 n/a 00:16:27.714 00:16:27.714 Elapsed time = 0.000 seconds 00:16:27.714 01:46:27 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:16:27.714 00:16:27.714 00:16:27.714 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.714 http://cunit.sourceforge.net/ 00:16:27.714 00:16:27.714 00:16:27.714 Suite: xor 00:16:27.714 Test: test_xor_gen ...passed 00:16:27.714 00:16:27.714 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.714 suites 1 1 n/a 0 0 00:16:27.714 tests 1 1 1 0 0 00:16:27.714 asserts 17 17 17 0 n/a 00:16:27.714 00:16:27.714 Elapsed time = 0.007 seconds 00:16:27.974 00:16:27.974 real 0m0.841s 00:16:27.974 user 0m0.597s 00:16:27.974 sys 0m0.249s 00:16:27.974 01:46:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.974 01:46:27 -- common/autotest_common.sh@10 -- # set +x 00:16:27.974 ************************************ 00:16:27.974 END TEST unittest_util 00:16:27.974 ************************************ 00:16:27.974 01:46:27 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:16:27.974 01:46:27 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:16:27.974 01:46:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.974 01:46:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.974 01:46:27 -- common/autotest_common.sh@10 -- # set +x 00:16:27.974 ************************************ 00:16:27.974 START TEST unittest_vhost 00:16:27.974 ************************************ 00:16:27.974 01:46:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:16:27.974 00:16:27.974 00:16:27.974 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.974 http://cunit.sourceforge.net/ 00:16:27.974 00:16:27.974 00:16:27.974 Suite: vhost_suite 00:16:27.974 Test: desc_to_iov_test ...[2024-04-24 01:46:27.945884] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:16:27.974 passed 00:16:27.974 Test: create_controller_test ...[2024-04-24 01:46:27.949185] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:16:27.974 [2024-04-24 01:46:27.949280] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:16:27.974 [2024-04-24 01:46:27.949376] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:16:27.974 [2024-04-24 01:46:27.949447] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:16:27.974 [2024-04-24 01:46:27.949501] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:16:27.974 [2024-04-24 01:46:27.949582] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1782:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-04-24 01:46:27.950316] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:16:27.974 passed 00:16:27.974 Test: session_find_by_vid_test ...passed 00:16:27.974 Test: remove_controller_test ...[2024-04-24 01:46:27.951895] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1867:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:16:27.974 passed 00:16:27.974 Test: vq_avail_ring_get_test ...passed 00:16:27.974 Test: vq_packed_ring_test ...passed 00:16:27.974 Test: vhost_blk_construct_test ...passed 00:16:27.974 00:16:27.974 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.974 suites 1 1 n/a 0 0 00:16:27.974 tests 7 7 7 0 0 00:16:27.974 asserts 147 147 147 0 n/a 00:16:27.974 00:16:27.974 Elapsed time = 0.009 seconds 00:16:27.974 00:16:27.974 real 0m0.060s 00:16:27.974 user 0m0.027s 00:16:27.974 sys 0m0.034s 00:16:27.974 01:46:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.974 01:46:27 -- common/autotest_common.sh@10 -- # set +x 00:16:27.974 ************************************ 00:16:27.974 END TEST unittest_vhost 00:16:27.974 ************************************ 00:16:27.974 01:46:28 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:16:27.974 01:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.974 01:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.974 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:28.234 ************************************ 00:16:28.234 START TEST unittest_dma 00:16:28.234 ************************************ 00:16:28.234 01:46:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:16:28.234 00:16:28.234 00:16:28.234 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.234 http://cunit.sourceforge.net/ 00:16:28.234 00:16:28.234 00:16:28.234 Suite: dma_suite 00:16:28.234 Test: test_dma ...[2024-04-24 01:46:28.105170] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:16:28.234 passed 00:16:28.234 00:16:28.234 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.234 suites 1 1 n/a 0 0 00:16:28.234 tests 1 1 1 0 0 00:16:28.234 asserts 54 54 54 0 n/a 00:16:28.234 00:16:28.234 Elapsed time = 0.000 seconds 00:16:28.234 00:16:28.234 real 0m0.038s 00:16:28.234 user 0m0.023s 00:16:28.234 sys 0m0.015s 00:16:28.234 01:46:28 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.234 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:28.234 ************************************ 00:16:28.234 END TEST unittest_dma 00:16:28.234 ************************************ 00:16:28.234 01:46:28 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:16:28.234 01:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:28.234 01:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.234 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:28.234 ************************************ 00:16:28.234 START TEST unittest_init 00:16:28.234 ************************************ 00:16:28.234 01:46:28 -- common/autotest_common.sh@1111 -- # unittest_init 00:16:28.234 01:46:28 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:16:28.234 00:16:28.234 00:16:28.234 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.235 http://cunit.sourceforge.net/ 00:16:28.235 00:16:28.235 00:16:28.235 Suite: subsystem_suite 00:16:28.235 Test: subsystem_sort_test_depends_on_single ...passed 00:16:28.235 Test: subsystem_sort_test_depends_on_multiple ...passed 00:16:28.235 Test: subsystem_sort_test_missing_dependency ...[2024-04-24 01:46:28.259107] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:16:28.235 passed 00:16:28.235 00:16:28.235 [2024-04-24 01:46:28.259451] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:16:28.235 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.235 suites 1 1 n/a 0 0 00:16:28.235 tests 3 3 3 0 0 00:16:28.235 asserts 20 20 20 0 n/a 00:16:28.235 00:16:28.235 Elapsed time = 0.001 seconds 00:16:28.235 00:16:28.235 real 0m0.041s 00:16:28.235 user 0m0.020s 00:16:28.235 sys 0m0.021s 00:16:28.235 01:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.235 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:28.235 ************************************ 00:16:28.235 END TEST unittest_init 00:16:28.235 ************************************ 00:16:28.493 01:46:28 -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:16:28.493 01:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:28.493 01:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.493 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:28.493 ************************************ 00:16:28.493 START TEST unittest_keyring 00:16:28.493 ************************************ 00:16:28.493 01:46:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:16:28.493 00:16:28.493 00:16:28.493 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.493 http://cunit.sourceforge.net/ 00:16:28.493 00:16:28.493 00:16:28.493 Suite: keyring 00:16:28.493 Test: test_keyring_add_remove ...[2024-04-24 01:46:28.381541] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:16:28.493 [2024-04-24 01:46:28.381896] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:16:28.493 [2024-04-24 01:46:28.382038] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the 
keyring 00:16:28.493 passed 00:16:28.493 Test: test_keyring_get_put ...passed 00:16:28.493 00:16:28.493 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.493 suites 1 1 n/a 0 0 00:16:28.493 tests 2 2 2 0 0 00:16:28.493 asserts 44 44 44 0 n/a 00:16:28.493 00:16:28.493 Elapsed time = 0.001 seconds 00:16:28.493 00:16:28.493 real 0m0.033s 00:16:28.493 user 0m0.017s 00:16:28.493 sys 0m0.016s 00:16:28.493 01:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.493 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:28.493 ************************************ 00:16:28.493 END TEST unittest_keyring 00:16:28.493 ************************************ 00:16:28.493 01:46:28 -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:16:28.493 01:46:28 -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:16:28.493 01:46:28 -- unit/unittest.sh@291 -- # hostname 00:16:28.493 01:46:28 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:16:28.752 geninfo: WARNING: invalid characters removed from testname! 00:16:55.317 01:46:54 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:16:59.594 01:46:59 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:02.929 01:47:02 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:05.457 01:47:04 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:07.986 01:47:07 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:10.576 01:47:10 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:13.105 01:47:12 -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:15.079 01:47:15 -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:17:15.079 01:47:15 -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:17:15.648 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:17:15.648 Found 316 entries. 00:17:15.648 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:17:15.648 Writing .css and .png files. 00:17:15.648 Generating output. 00:17:15.906 Processing file include/linux/virtio_ring.h 00:17:16.164 Processing file include/spdk/mmio.h 00:17:16.164 Processing file include/spdk/histogram_data.h 00:17:16.164 Processing file include/spdk/thread.h 00:17:16.164 Processing file include/spdk/base64.h 00:17:16.164 Processing file include/spdk/bdev_module.h 00:17:16.164 Processing file include/spdk/nvme.h 00:17:16.164 Processing file include/spdk/endian.h 00:17:16.164 Processing file include/spdk/nvme_spec.h 00:17:16.164 Processing file include/spdk/util.h 00:17:16.164 Processing file include/spdk/nvmf_transport.h 00:17:16.164 Processing file include/spdk/trace.h 00:17:16.164 Processing file include/spdk_internal/virtio.h 00:17:16.164 Processing file include/spdk_internal/rdma.h 00:17:16.164 Processing file include/spdk_internal/sock.h 00:17:16.164 Processing file include/spdk_internal/nvme_tcp.h 00:17:16.164 Processing file include/spdk_internal/sgl.h 00:17:16.164 Processing file include/spdk_internal/utf.h 00:17:16.450 Processing file lib/accel/accel_rpc.c 00:17:16.450 Processing file lib/accel/accel_sw.c 00:17:16.450 Processing file lib/accel/accel.c 00:17:16.450 Processing file lib/bdev/bdev.c 00:17:16.450 Processing file lib/bdev/part.c 00:17:16.450 Processing file lib/bdev/scsi_nvme.c 00:17:16.450 Processing file lib/bdev/bdev_zone.c 00:17:16.450 Processing file lib/bdev/bdev_rpc.c 00:17:16.708 Processing file lib/blob/blobstore.c 00:17:16.708 Processing file lib/blob/zeroes.c 00:17:16.708 Processing file lib/blob/request.c 00:17:16.708 Processing file lib/blob/blob_bs_dev.c 00:17:16.708 Processing file lib/blob/blobstore.h 00:17:16.968 Processing file lib/blobfs/blobfs.c 00:17:16.968 Processing file lib/blobfs/tree.c 00:17:16.968 Processing file lib/conf/conf.c 00:17:16.968 Processing file lib/dma/dma.c 00:17:17.227 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:17:17.227 Processing file lib/env_dpdk/pci_vmd.c 00:17:17.227 Processing file lib/env_dpdk/threads.c 00:17:17.227 Processing file lib/env_dpdk/env.c 00:17:17.227 Processing file lib/env_dpdk/init.c 00:17:17.227 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:17:17.227 
Processing file lib/env_dpdk/pci.c 00:17:17.227 Processing file lib/env_dpdk/pci_ioat.c 00:17:17.227 Processing file lib/env_dpdk/memory.c 00:17:17.227 Processing file lib/env_dpdk/pci_dpdk.c 00:17:17.227 Processing file lib/env_dpdk/sigbus_handler.c 00:17:17.227 Processing file lib/env_dpdk/pci_idxd.c 00:17:17.227 Processing file lib/env_dpdk/pci_virtio.c 00:17:17.227 Processing file lib/env_dpdk/pci_event.c 00:17:17.227 Processing file lib/event/app_rpc.c 00:17:17.227 Processing file lib/event/scheduler_static.c 00:17:17.227 Processing file lib/event/log_rpc.c 00:17:17.227 Processing file lib/event/app.c 00:17:17.227 Processing file lib/event/reactor.c 00:17:17.794 Processing file lib/ftl/ftl_io.c 00:17:17.794 Processing file lib/ftl/ftl_writer.h 00:17:17.794 Processing file lib/ftl/ftl_writer.c 00:17:17.794 Processing file lib/ftl/ftl_nv_cache.c 00:17:17.794 Processing file lib/ftl/ftl_nv_cache_io.h 00:17:17.794 Processing file lib/ftl/ftl_core.h 00:17:17.794 Processing file lib/ftl/ftl_rq.c 00:17:17.794 Processing file lib/ftl/ftl_debug.h 00:17:17.794 Processing file lib/ftl/ftl_trace.c 00:17:17.794 Processing file lib/ftl/ftl_sb.c 00:17:17.794 Processing file lib/ftl/ftl_layout.c 00:17:17.794 Processing file lib/ftl/ftl_p2l.c 00:17:17.794 Processing file lib/ftl/ftl_band.h 00:17:17.794 Processing file lib/ftl/ftl_core.c 00:17:17.794 Processing file lib/ftl/ftl_reloc.c 00:17:17.794 Processing file lib/ftl/ftl_band.c 00:17:17.794 Processing file lib/ftl/ftl_nv_cache.h 00:17:17.794 Processing file lib/ftl/ftl_init.c 00:17:17.794 Processing file lib/ftl/ftl_l2p.c 00:17:17.794 Processing file lib/ftl/ftl_io.h 00:17:17.794 Processing file lib/ftl/ftl_l2p_flat.c 00:17:17.794 Processing file lib/ftl/ftl_l2p_cache.c 00:17:17.794 Processing file lib/ftl/ftl_debug.c 00:17:17.794 Processing file lib/ftl/ftl_band_ops.c 00:17:17.794 Processing file lib/ftl/base/ftl_base_dev.c 00:17:17.794 Processing file lib/ftl/base/ftl_base_bdev.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt.c 00:17:18.158 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:17:18.158 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:17:18.158 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:17:18.416 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:17:18.416 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:17:18.416 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:17:18.416 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:17:18.416 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:17:18.416 Processing file lib/ftl/utils/ftl_mempool.c 00:17:18.416 Processing file lib/ftl/utils/ftl_md.c 00:17:18.416 Processing file lib/ftl/utils/ftl_property.c 00:17:18.416 Processing file lib/ftl/utils/ftl_df.h 00:17:18.416 Processing file lib/ftl/utils/ftl_conf.c 00:17:18.416 Processing file lib/ftl/utils/ftl_property.h 00:17:18.416 Processing file lib/ftl/utils/ftl_bitmap.c 00:17:18.416 
Processing file lib/ftl/utils/ftl_addr_utils.h 00:17:18.674 Processing file lib/idxd/idxd_internal.h 00:17:18.674 Processing file lib/idxd/idxd.c 00:17:18.674 Processing file lib/idxd/idxd_user.c 00:17:18.674 Processing file lib/init/rpc.c 00:17:18.674 Processing file lib/init/json_config.c 00:17:18.674 Processing file lib/init/subsystem_rpc.c 00:17:18.674 Processing file lib/init/subsystem.c 00:17:18.674 Processing file lib/ioat/ioat_internal.h 00:17:18.674 Processing file lib/ioat/ioat.c 00:17:19.240 Processing file lib/iscsi/init_grp.c 00:17:19.240 Processing file lib/iscsi/param.c 00:17:19.240 Processing file lib/iscsi/iscsi.c 00:17:19.240 Processing file lib/iscsi/md5.c 00:17:19.240 Processing file lib/iscsi/tgt_node.c 00:17:19.240 Processing file lib/iscsi/conn.c 00:17:19.240 Processing file lib/iscsi/iscsi_rpc.c 00:17:19.240 Processing file lib/iscsi/task.h 00:17:19.240 Processing file lib/iscsi/portal_grp.c 00:17:19.240 Processing file lib/iscsi/task.c 00:17:19.240 Processing file lib/iscsi/iscsi_subsystem.c 00:17:19.240 Processing file lib/iscsi/iscsi.h 00:17:19.240 Processing file lib/json/json_util.c 00:17:19.240 Processing file lib/json/json_parse.c 00:17:19.240 Processing file lib/json/json_write.c 00:17:19.240 Processing file lib/jsonrpc/jsonrpc_server.c 00:17:19.240 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:17:19.240 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:17:19.240 Processing file lib/jsonrpc/jsonrpc_client.c 00:17:19.498 Processing file lib/keyring/keyring.c 00:17:19.498 Processing file lib/keyring/keyring_rpc.c 00:17:19.498 Processing file lib/log/log.c 00:17:19.498 Processing file lib/log/log_deprecated.c 00:17:19.498 Processing file lib/log/log_flags.c 00:17:19.498 Processing file lib/lvol/lvol.c 00:17:19.757 Processing file lib/nbd/nbd.c 00:17:19.757 Processing file lib/nbd/nbd_rpc.c 00:17:19.757 Processing file lib/notify/notify.c 00:17:19.757 Processing file lib/notify/notify_rpc.c 00:17:20.323 Processing file lib/nvme/nvme_quirks.c 00:17:20.323 Processing file lib/nvme/nvme_rdma.c 00:17:20.323 Processing file lib/nvme/nvme_io_msg.c 00:17:20.323 Processing file lib/nvme/nvme_transport.c 00:17:20.323 Processing file lib/nvme/nvme_poll_group.c 00:17:20.323 Processing file lib/nvme/nvme_tcp.c 00:17:20.323 Processing file lib/nvme/nvme_zns.c 00:17:20.323 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:17:20.323 Processing file lib/nvme/nvme_qpair.c 00:17:20.323 Processing file lib/nvme/nvme_auth.c 00:17:20.323 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:17:20.323 Processing file lib/nvme/nvme_fabric.c 00:17:20.323 Processing file lib/nvme/nvme_ctrlr.c 00:17:20.323 Processing file lib/nvme/nvme.c 00:17:20.323 Processing file lib/nvme/nvme_internal.h 00:17:20.323 Processing file lib/nvme/nvme_cuse.c 00:17:20.323 Processing file lib/nvme/nvme_opal.c 00:17:20.323 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:17:20.323 Processing file lib/nvme/nvme_discovery.c 00:17:20.323 Processing file lib/nvme/nvme_pcie_internal.h 00:17:20.323 Processing file lib/nvme/nvme_ns_cmd.c 00:17:20.323 Processing file lib/nvme/nvme_pcie_common.c 00:17:20.323 Processing file lib/nvme/nvme_pcie.c 00:17:20.323 Processing file lib/nvme/nvme_ns.c 00:17:20.889 Processing file lib/nvmf/nvmf_internal.h 00:17:20.889 Processing file lib/nvmf/subsystem.c 00:17:20.889 Processing file lib/nvmf/tcp.c 00:17:20.889 Processing file lib/nvmf/nvmf_rpc.c 00:17:20.889 Processing file lib/nvmf/ctrlr_bdev.c 00:17:20.889 Processing file lib/nvmf/ctrlr.c 00:17:20.889 Processing file 
lib/nvmf/ctrlr_discovery.c 00:17:20.889 Processing file lib/nvmf/rdma.c 00:17:20.889 Processing file lib/nvmf/nvmf.c 00:17:20.889 Processing file lib/nvmf/transport.c 00:17:20.889 Processing file lib/rdma/common.c 00:17:20.889 Processing file lib/rdma/rdma_verbs.c 00:17:21.149 Processing file lib/rpc/rpc.c 00:17:21.149 Processing file lib/scsi/scsi.c 00:17:21.149 Processing file lib/scsi/scsi_bdev.c 00:17:21.149 Processing file lib/scsi/dev.c 00:17:21.149 Processing file lib/scsi/task.c 00:17:21.149 Processing file lib/scsi/port.c 00:17:21.149 Processing file lib/scsi/lun.c 00:17:21.149 Processing file lib/scsi/scsi_pr.c 00:17:21.149 Processing file lib/scsi/scsi_rpc.c 00:17:21.407 Processing file lib/sock/sock_rpc.c 00:17:21.407 Processing file lib/sock/sock.c 00:17:21.407 Processing file lib/thread/iobuf.c 00:17:21.407 Processing file lib/thread/thread.c 00:17:21.666 Processing file lib/trace/trace_flags.c 00:17:21.666 Processing file lib/trace/trace.c 00:17:21.666 Processing file lib/trace/trace_rpc.c 00:17:21.666 Processing file lib/trace_parser/trace.cpp 00:17:21.666 Processing file lib/ut/ut.c 00:17:21.666 Processing file lib/ut_mock/mock.c 00:17:21.925 Processing file lib/util/dif.c 00:17:21.925 Processing file lib/util/pipe.c 00:17:21.925 Processing file lib/util/xor.c 00:17:21.925 Processing file lib/util/fd_group.c 00:17:21.925 Processing file lib/util/zipf.c 00:17:21.925 Processing file lib/util/crc16.c 00:17:21.925 Processing file lib/util/hexlify.c 00:17:21.925 Processing file lib/util/cpuset.c 00:17:21.925 Processing file lib/util/fd.c 00:17:21.925 Processing file lib/util/crc32.c 00:17:21.925 Processing file lib/util/file.c 00:17:21.925 Processing file lib/util/iov.c 00:17:21.925 Processing file lib/util/crc32_ieee.c 00:17:21.925 Processing file lib/util/strerror_tls.c 00:17:21.925 Processing file lib/util/base64.c 00:17:21.925 Processing file lib/util/uuid.c 00:17:21.925 Processing file lib/util/crc64.c 00:17:21.925 Processing file lib/util/math.c 00:17:21.925 Processing file lib/util/bit_array.c 00:17:21.925 Processing file lib/util/crc32c.c 00:17:21.925 Processing file lib/util/string.c 00:17:22.184 Processing file lib/vfio_user/host/vfio_user_pci.c 00:17:22.184 Processing file lib/vfio_user/host/vfio_user.c 00:17:22.442 Processing file lib/vhost/vhost_internal.h 00:17:22.442 Processing file lib/vhost/vhost.c 00:17:22.442 Processing file lib/vhost/vhost_scsi.c 00:17:22.442 Processing file lib/vhost/rte_vhost_user.c 00:17:22.442 Processing file lib/vhost/vhost_blk.c 00:17:22.442 Processing file lib/vhost/vhost_rpc.c 00:17:22.442 Processing file lib/virtio/virtio_pci.c 00:17:22.442 Processing file lib/virtio/virtio_vfio_user.c 00:17:22.442 Processing file lib/virtio/virtio.c 00:17:22.442 Processing file lib/virtio/virtio_vhost_user.c 00:17:22.442 Processing file lib/vmd/vmd.c 00:17:22.442 Processing file lib/vmd/led.c 00:17:22.700 Processing file module/accel/dsa/accel_dsa.c 00:17:22.700 Processing file module/accel/dsa/accel_dsa_rpc.c 00:17:22.700 Processing file module/accel/error/accel_error_rpc.c 00:17:22.700 Processing file module/accel/error/accel_error.c 00:17:22.700 Processing file module/accel/iaa/accel_iaa_rpc.c 00:17:22.700 Processing file module/accel/iaa/accel_iaa.c 00:17:22.700 Processing file module/accel/ioat/accel_ioat_rpc.c 00:17:22.700 Processing file module/accel/ioat/accel_ioat.c 00:17:22.957 Processing file module/bdev/aio/bdev_aio.c 00:17:22.957 Processing file module/bdev/aio/bdev_aio_rpc.c 00:17:22.957 Processing file 
module/bdev/delay/vbdev_delay_rpc.c 00:17:22.957 Processing file module/bdev/delay/vbdev_delay.c 00:17:22.957 Processing file module/bdev/error/vbdev_error_rpc.c 00:17:22.957 Processing file module/bdev/error/vbdev_error.c 00:17:23.215 Processing file module/bdev/ftl/bdev_ftl.c 00:17:23.215 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:17:23.215 Processing file module/bdev/gpt/gpt.c 00:17:23.215 Processing file module/bdev/gpt/gpt.h 00:17:23.215 Processing file module/bdev/gpt/vbdev_gpt.c 00:17:23.215 Processing file module/bdev/iscsi/bdev_iscsi.c 00:17:23.215 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:17:23.474 Processing file module/bdev/lvol/vbdev_lvol.c 00:17:23.474 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:17:23.474 Processing file module/bdev/malloc/bdev_malloc.c 00:17:23.474 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:17:23.474 Processing file module/bdev/null/bdev_null_rpc.c 00:17:23.474 Processing file module/bdev/null/bdev_null.c 00:17:24.043 Processing file module/bdev/nvme/bdev_mdns_client.c 00:17:24.043 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:17:24.043 Processing file module/bdev/nvme/nvme_rpc.c 00:17:24.043 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:17:24.043 Processing file module/bdev/nvme/bdev_nvme.c 00:17:24.043 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:17:24.043 Processing file module/bdev/nvme/vbdev_opal.c 00:17:24.043 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:17:24.043 Processing file module/bdev/passthru/vbdev_passthru.c 00:17:24.043 Processing file module/bdev/raid/raid5f.c 00:17:24.043 Processing file module/bdev/raid/raid0.c 00:17:24.043 Processing file module/bdev/raid/bdev_raid.c 00:17:24.043 Processing file module/bdev/raid/bdev_raid.h 00:17:24.043 Processing file module/bdev/raid/bdev_raid_rpc.c 00:17:24.043 Processing file module/bdev/raid/bdev_raid_sb.c 00:17:24.043 Processing file module/bdev/raid/concat.c 00:17:24.043 Processing file module/bdev/raid/raid1.c 00:17:24.301 Processing file module/bdev/split/vbdev_split_rpc.c 00:17:24.301 Processing file module/bdev/split/vbdev_split.c 00:17:24.301 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:17:24.301 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:17:24.301 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:17:24.301 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:17:24.301 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:17:24.559 Processing file module/blob/bdev/blob_bdev.c 00:17:24.559 Processing file module/blobfs/bdev/blobfs_bdev.c 00:17:24.559 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:17:24.559 Processing file module/env_dpdk/env_dpdk_rpc.c 00:17:24.817 Processing file module/event/subsystems/accel/accel.c 00:17:24.817 Processing file module/event/subsystems/bdev/bdev.c 00:17:24.817 Processing file module/event/subsystems/iobuf/iobuf.c 00:17:24.817 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:17:24.817 Processing file module/event/subsystems/iscsi/iscsi.c 00:17:25.075 Processing file module/event/subsystems/keyring/keyring.c 00:17:25.075 Processing file module/event/subsystems/nbd/nbd.c 00:17:25.075 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:17:25.075 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:17:25.075 Processing file module/event/subsystems/scheduler/scheduler.c 00:17:25.333 Processing file module/event/subsystems/scsi/scsi.c 00:17:25.334 Processing file module/event/subsystems/sock/sock.c 
00:17:25.334 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:17:25.334 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:17:25.592 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:17:25.592 Processing file module/event/subsystems/vmd/vmd.c 00:17:25.592 Processing file module/keyring/file/keyring_rpc.c 00:17:25.592 Processing file module/keyring/file/keyring.c 00:17:25.592 Processing file module/keyring/linux/keyring.c 00:17:25.592 Processing file module/keyring/linux/keyring_rpc.c 00:17:25.592 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:17:25.851 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:17:25.851 Processing file module/scheduler/gscheduler/gscheduler.c 00:17:25.851 Processing file module/sock/sock_kernel.h 00:17:25.851 Processing file module/sock/posix/posix.c 00:17:25.851 Writing directory view page. 00:17:25.851 Overall coverage rate: 00:17:25.851 lines......: 38.9% (39955 of 102730 lines) 00:17:25.851 functions..: 42.6% (3654 of 8572 functions) 00:17:25.851 00:17:25.851 00:17:25.851 ===================== 00:17:25.851 All unit tests passed 00:17:25.851 ===================== 00:17:25.851 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:17:25.851 01:47:25 -- unit/unittest.sh@303 -- # set +x 00:17:25.851 00:17:25.851 00:17:25.851 00:17:25.851 real 3m14.724s 00:17:25.851 user 2m45.308s 00:17:25.851 sys 0m20.235s 00:17:25.851 01:47:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:26.109 01:47:25 -- common/autotest_common.sh@10 -- # set +x 00:17:26.109 ************************************ 00:17:26.109 END TEST unittest 00:17:26.109 ************************************ 00:17:26.109 01:47:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:17:26.110 01:47:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:17:26.110 01:47:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:17:26.110 01:47:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:17:26.110 01:47:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.110 01:47:25 -- common/autotest_common.sh@10 -- # set +x 00:17:26.110 01:47:25 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:26.110 01:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:26.110 01:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.110 01:47:25 -- common/autotest_common.sh@10 -- # set +x 00:17:26.110 ************************************ 00:17:26.110 START TEST env 00:17:26.110 ************************************ 00:17:26.110 01:47:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:26.110 * Looking for test storage... 
00:17:26.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:26.110 01:47:26 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:26.110 01:47:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:26.110 01:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.110 01:47:26 -- common/autotest_common.sh@10 -- # set +x 00:17:26.369 ************************************ 00:17:26.369 START TEST env_memory 00:17:26.369 ************************************ 00:17:26.369 01:47:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:26.369 00:17:26.369 00:17:26.369 CUnit - A unit testing framework for C - Version 2.1-3 00:17:26.369 http://cunit.sourceforge.net/ 00:17:26.369 00:17:26.369 00:17:26.369 Suite: memory 00:17:26.369 Test: alloc and free memory map ...[2024-04-24 01:47:26.264228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:26.369 passed 00:17:26.369 Test: mem map translation ...[2024-04-24 01:47:26.321047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:26.369 [2024-04-24 01:47:26.321237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:26.369 [2024-04-24 01:47:26.321391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:26.369 [2024-04-24 01:47:26.321488] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:26.369 passed 00:17:26.369 Test: mem map registration ...[2024-04-24 01:47:26.441407] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:17:26.369 [2024-04-24 01:47:26.441663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:17:26.628 passed 00:17:26.628 Test: mem map adjacent registrations ...passed 00:17:26.628 00:17:26.628 Run Summary: Type Total Ran Passed Failed Inactive 00:17:26.628 suites 1 1 n/a 0 0 00:17:26.628 tests 4 4 4 0 0 00:17:26.628 asserts 152 152 152 0 n/a 00:17:26.628 00:17:26.628 Elapsed time = 0.322 seconds 00:17:26.628 00:17:26.628 real 0m0.364s 00:17:26.628 user 0m0.323s 00:17:26.628 sys 0m0.041s 00:17:26.628 01:47:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:26.628 ************************************ 00:17:26.628 END TEST env_memory 00:17:26.628 01:47:26 -- common/autotest_common.sh@10 -- # set +x 00:17:26.628 ************************************ 00:17:26.628 01:47:26 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:26.628 01:47:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:26.628 01:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.628 01:47:26 -- common/autotest_common.sh@10 -- # set +x 00:17:26.628 ************************************ 00:17:26.628 START TEST env_vtophys 00:17:26.628 ************************************ 00:17:26.628 01:47:26 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:26.628 EAL: lib.eal log level changed from notice to debug 00:17:26.628 EAL: Detected lcore 0 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 1 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 2 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 3 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 4 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 5 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 6 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 7 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 8 as core 0 on socket 0 00:17:26.628 EAL: Detected lcore 9 as core 0 on socket 0 00:17:26.887 EAL: Maximum logical cores by configuration: 128 00:17:26.887 EAL: Detected CPU lcores: 10 00:17:26.887 EAL: Detected NUMA nodes: 1 00:17:26.887 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:17:26.887 EAL: Checking presence of .so 'librte_eal.so.24' 00:17:26.887 EAL: Checking presence of .so 'librte_eal.so' 00:17:26.887 EAL: Detected static linkage of DPDK 00:17:26.887 EAL: No shared files mode enabled, IPC will be disabled 00:17:26.887 EAL: Selected IOVA mode 'PA' 00:17:26.887 EAL: Probing VFIO support... 00:17:26.887 EAL: IOMMU type 1 (Type 1) is supported 00:17:26.887 EAL: IOMMU type 7 (sPAPR) is not supported 00:17:26.887 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:17:26.887 EAL: VFIO support initialized 00:17:26.887 EAL: Ask a virtual area of 0x2e000 bytes 00:17:26.887 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:26.887 EAL: Setting up physically contiguous memory... 00:17:26.887 EAL: Setting maximum number of open files to 1048576 00:17:26.887 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:26.887 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:26.887 EAL: Ask a virtual area of 0x61000 bytes 00:17:26.887 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:26.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:26.887 EAL: Ask a virtual area of 0x400000000 bytes 00:17:26.887 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:26.887 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:26.887 EAL: Ask a virtual area of 0x61000 bytes 00:17:26.887 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:26.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:26.887 EAL: Ask a virtual area of 0x400000000 bytes 00:17:26.887 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:26.887 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:26.887 EAL: Ask a virtual area of 0x61000 bytes 00:17:26.887 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:26.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:26.887 EAL: Ask a virtual area of 0x400000000 bytes 00:17:26.887 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:26.887 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:26.887 EAL: Ask a virtual area of 0x61000 bytes 00:17:26.887 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:26.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:26.887 EAL: Ask a virtual area of 0x400000000 bytes 00:17:26.887 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:26.887 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:26.887 EAL: Hugepages will be freed exactly as allocated. 
00:17:26.887 EAL: No shared files mode enabled, IPC is disabled 00:17:26.887 EAL: No shared files mode enabled, IPC is disabled 00:17:26.887 EAL: TSC frequency is ~2100000 KHz 00:17:26.887 EAL: Main lcore 0 is ready (tid=7f5e33d40a80;cpuset=[0]) 00:17:26.887 EAL: Trying to obtain current memory policy. 00:17:26.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:26.887 EAL: Restoring previous memory policy: 0 00:17:26.887 EAL: request: mp_malloc_sync 00:17:26.887 EAL: No shared files mode enabled, IPC is disabled 00:17:26.887 EAL: Heap on socket 0 was expanded by 2MB 00:17:26.887 EAL: No shared files mode enabled, IPC is disabled 00:17:26.887 EAL: Mem event callback 'spdk:(nil)' registered 00:17:26.887 00:17:26.887 00:17:26.887 CUnit - A unit testing framework for C - Version 2.1-3 00:17:26.887 http://cunit.sourceforge.net/ 00:17:26.887 00:17:26.887 00:17:26.887 Suite: components_suite 00:17:27.453 Test: vtophys_malloc_test ...passed 00:17:27.454 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:27.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.454 EAL: Restoring previous memory policy: 0 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was expanded by 4MB 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was shrunk by 4MB 00:17:27.454 EAL: Trying to obtain current memory policy. 00:17:27.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.454 EAL: Restoring previous memory policy: 0 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was expanded by 6MB 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was shrunk by 6MB 00:17:27.454 EAL: Trying to obtain current memory policy. 00:17:27.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.454 EAL: Restoring previous memory policy: 0 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was expanded by 10MB 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was shrunk by 10MB 00:17:27.454 EAL: Trying to obtain current memory policy. 00:17:27.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.454 EAL: Restoring previous memory policy: 0 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was expanded by 18MB 00:17:27.454 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.454 EAL: request: mp_malloc_sync 00:17:27.454 EAL: No shared files mode enabled, IPC is disabled 00:17:27.454 EAL: Heap on socket 0 was shrunk by 18MB 00:17:27.713 EAL: Trying to obtain current memory policy. 
00:17:27.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.713 EAL: Restoring previous memory policy: 0 00:17:27.713 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.713 EAL: request: mp_malloc_sync 00:17:27.713 EAL: No shared files mode enabled, IPC is disabled 00:17:27.713 EAL: Heap on socket 0 was expanded by 34MB 00:17:27.713 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.713 EAL: request: mp_malloc_sync 00:17:27.713 EAL: No shared files mode enabled, IPC is disabled 00:17:27.713 EAL: Heap on socket 0 was shrunk by 34MB 00:17:27.713 EAL: Trying to obtain current memory policy. 00:17:27.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.713 EAL: Restoring previous memory policy: 0 00:17:27.713 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.713 EAL: request: mp_malloc_sync 00:17:27.713 EAL: No shared files mode enabled, IPC is disabled 00:17:27.713 EAL: Heap on socket 0 was expanded by 66MB 00:17:27.972 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.972 EAL: request: mp_malloc_sync 00:17:27.972 EAL: No shared files mode enabled, IPC is disabled 00:17:27.972 EAL: Heap on socket 0 was shrunk by 66MB 00:17:27.972 EAL: Trying to obtain current memory policy. 00:17:27.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:27.972 EAL: Restoring previous memory policy: 0 00:17:27.972 EAL: Calling mem event callback 'spdk:(nil)' 00:17:27.972 EAL: request: mp_malloc_sync 00:17:27.972 EAL: No shared files mode enabled, IPC is disabled 00:17:27.972 EAL: Heap on socket 0 was expanded by 130MB 00:17:28.230 EAL: Calling mem event callback 'spdk:(nil)' 00:17:28.230 EAL: request: mp_malloc_sync 00:17:28.230 EAL: No shared files mode enabled, IPC is disabled 00:17:28.230 EAL: Heap on socket 0 was shrunk by 130MB 00:17:28.489 EAL: Trying to obtain current memory policy. 00:17:28.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:28.489 EAL: Restoring previous memory policy: 0 00:17:28.489 EAL: Calling mem event callback 'spdk:(nil)' 00:17:28.489 EAL: request: mp_malloc_sync 00:17:28.489 EAL: No shared files mode enabled, IPC is disabled 00:17:28.489 EAL: Heap on socket 0 was expanded by 258MB 00:17:29.057 EAL: Calling mem event callback 'spdk:(nil)' 00:17:29.057 EAL: request: mp_malloc_sync 00:17:29.057 EAL: No shared files mode enabled, IPC is disabled 00:17:29.057 EAL: Heap on socket 0 was shrunk by 258MB 00:17:29.623 EAL: Trying to obtain current memory policy. 00:17:29.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:29.623 EAL: Restoring previous memory policy: 0 00:17:29.623 EAL: Calling mem event callback 'spdk:(nil)' 00:17:29.623 EAL: request: mp_malloc_sync 00:17:29.623 EAL: No shared files mode enabled, IPC is disabled 00:17:29.623 EAL: Heap on socket 0 was expanded by 514MB 00:17:30.557 EAL: Calling mem event callback 'spdk:(nil)' 00:17:30.557 EAL: request: mp_malloc_sync 00:17:30.557 EAL: No shared files mode enabled, IPC is disabled 00:17:30.557 EAL: Heap on socket 0 was shrunk by 514MB 00:17:31.493 EAL: Trying to obtain current memory policy. 
00:17:31.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:31.810 EAL: Restoring previous memory policy: 0 00:17:31.810 EAL: Calling mem event callback 'spdk:(nil)' 00:17:31.810 EAL: request: mp_malloc_sync 00:17:31.810 EAL: No shared files mode enabled, IPC is disabled 00:17:31.810 EAL: Heap on socket 0 was expanded by 1026MB 00:17:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:17:33.727 EAL: request: mp_malloc_sync 00:17:33.727 EAL: No shared files mode enabled, IPC is disabled 00:17:33.727 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:35.625 passed 00:17:35.625 00:17:35.625 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.625 suites 1 1 n/a 0 0 00:17:35.625 tests 2 2 2 0 0 00:17:35.625 asserts 6363 6363 6363 0 n/a 00:17:35.625 00:17:35.625 Elapsed time = 8.505 seconds 00:17:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:17:35.625 EAL: request: mp_malloc_sync 00:17:35.625 EAL: No shared files mode enabled, IPC is disabled 00:17:35.625 EAL: Heap on socket 0 was shrunk by 2MB 00:17:35.625 EAL: No shared files mode enabled, IPC is disabled 00:17:35.625 EAL: No shared files mode enabled, IPC is disabled 00:17:35.625 EAL: No shared files mode enabled, IPC is disabled 00:17:35.625 00:17:35.625 real 0m8.823s 00:17:35.625 user 0m7.774s 00:17:35.625 sys 0m0.922s 00:17:35.625 01:47:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.625 ************************************ 00:17:35.625 END TEST env_vtophys 00:17:35.625 01:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:35.625 ************************************ 00:17:35.625 01:47:35 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:35.625 01:47:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:35.625 01:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.625 01:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:35.625 ************************************ 00:17:35.625 START TEST env_pci 00:17:35.625 ************************************ 00:17:35.625 01:47:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:35.625 00:17:35.625 00:17:35.625 CUnit - A unit testing framework for C - Version 2.1-3 00:17:35.625 http://cunit.sourceforge.net/ 00:17:35.625 00:17:35.625 00:17:35.625 Suite: pci 00:17:35.625 Test: pci_hook ...[2024-04-24 01:47:35.615392] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 110395 has claimed it 00:17:35.625 passed 00:17:35.625 00:17:35.625 EAL: Cannot find device (10000:00:01.0) 00:17:35.625 EAL: Failed to attach device on primary process 00:17:35.625 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.625 suites 1 1 n/a 0 0 00:17:35.625 tests 1 1 1 0 0 00:17:35.625 asserts 25 25 25 0 n/a 00:17:35.625 00:17:35.625 Elapsed time = 0.007 seconds 00:17:35.625 00:17:35.625 real 0m0.109s 00:17:35.625 user 0m0.058s 00:17:35.625 sys 0m0.051s 00:17:35.625 ************************************ 00:17:35.625 END TEST env_pci 00:17:35.625 ************************************ 00:17:35.625 01:47:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.625 01:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:35.897 01:47:35 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:35.897 01:47:35 -- env/env.sh@15 -- # uname 00:17:35.897 01:47:35 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:35.897 01:47:35 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:17:35.897 01:47:35 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:35.897 01:47:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:35.897 01:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.897 01:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:35.897 ************************************ 00:17:35.897 START TEST env_dpdk_post_init 00:17:35.897 ************************************ 00:17:35.897 01:47:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:35.897 EAL: Detected CPU lcores: 10 00:17:35.897 EAL: Detected NUMA nodes: 1 00:17:35.897 EAL: Detected static linkage of DPDK 00:17:35.897 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:35.898 EAL: Selected IOVA mode 'PA' 00:17:35.898 EAL: VFIO support initialized 00:17:36.159 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:36.159 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:17:36.159 Starting DPDK initialization... 00:17:36.159 Starting SPDK post initialization... 00:17:36.159 SPDK NVMe probe 00:17:36.159 Attaching to 0000:00:10.0 00:17:36.159 Attached to 0000:00:10.0 00:17:36.159 Cleaning up... 00:17:36.159 00:17:36.159 real 0m0.326s 00:17:36.159 user 0m0.127s 00:17:36.159 sys 0m0.103s 00:17:36.159 01:47:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.159 01:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.159 ************************************ 00:17:36.159 END TEST env_dpdk_post_init 00:17:36.159 ************************************ 00:17:36.159 01:47:36 -- env/env.sh@26 -- # uname 00:17:36.159 01:47:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:36.159 01:47:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:36.159 01:47:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:36.159 01:47:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.159 01:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.159 ************************************ 00:17:36.159 START TEST env_mem_callbacks 00:17:36.159 ************************************ 00:17:36.159 01:47:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:36.457 EAL: Detected CPU lcores: 10 00:17:36.457 EAL: Detected NUMA nodes: 1 00:17:36.457 EAL: Detected static linkage of DPDK 00:17:36.457 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:36.457 EAL: Selected IOVA mode 'PA' 00:17:36.457 EAL: VFIO support initialized 00:17:36.457 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:36.457 00:17:36.457 00:17:36.457 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.457 http://cunit.sourceforge.net/ 00:17:36.457 00:17:36.457 00:17:36.457 Suite: memory 00:17:36.457 Test: test ... 
00:17:36.457 register 0x200000200000 2097152 00:17:36.457 malloc 3145728 00:17:36.457 register 0x200000400000 4194304 00:17:36.457 buf 0x2000004fffc0 len 3145728 PASSED 00:17:36.457 malloc 64 00:17:36.457 buf 0x2000004ffec0 len 64 PASSED 00:17:36.457 malloc 4194304 00:17:36.457 register 0x200000800000 6291456 00:17:36.457 buf 0x2000009fffc0 len 4194304 PASSED 00:17:36.457 free 0x2000004fffc0 3145728 00:17:36.457 free 0x2000004ffec0 64 00:17:36.457 unregister 0x200000400000 4194304 PASSED 00:17:36.457 free 0x2000009fffc0 4194304 00:17:36.457 unregister 0x200000800000 6291456 PASSED 00:17:36.457 malloc 8388608 00:17:36.457 register 0x200000400000 10485760 00:17:36.457 buf 0x2000005fffc0 len 8388608 PASSED 00:17:36.457 free 0x2000005fffc0 8388608 00:17:36.457 unregister 0x200000400000 10485760 PASSED 00:17:36.457 passed 00:17:36.457 00:17:36.457 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.457 suites 1 1 n/a 0 0 00:17:36.457 tests 1 1 1 0 0 00:17:36.457 asserts 15 15 15 0 n/a 00:17:36.457 00:17:36.457 Elapsed time = 0.083 seconds 00:17:36.457 00:17:36.457 real 0m0.315s 00:17:36.457 user 0m0.143s 00:17:36.457 sys 0m0.073s 00:17:36.457 01:47:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.457 01:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.457 ************************************ 00:17:36.457 END TEST env_mem_callbacks 00:17:36.457 ************************************ 00:17:36.715 00:17:36.715 real 0m10.533s 00:17:36.715 user 0m8.696s 00:17:36.715 sys 0m1.523s 00:17:36.715 01:47:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.715 ************************************ 00:17:36.715 END TEST env 00:17:36.715 ************************************ 00:17:36.715 01:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.715 01:47:36 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:36.715 01:47:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:36.715 01:47:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.715 01:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.715 ************************************ 00:17:36.715 START TEST rpc 00:17:36.715 ************************************ 00:17:36.715 01:47:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:36.715 * Looking for test storage... 00:17:36.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:36.715 01:47:36 -- rpc/rpc.sh@65 -- # spdk_pid=110546 00:17:36.715 01:47:36 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:17:36.715 01:47:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:36.715 01:47:36 -- rpc/rpc.sh@67 -- # waitforlisten 110546 00:17:36.715 01:47:36 -- common/autotest_common.sh@817 -- # '[' -z 110546 ']' 00:17:36.715 01:47:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.715 01:47:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:36.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.715 01:47:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:36.715 01:47:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.715 01:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.972 [2024-04-24 01:47:36.892529] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:17:36.972 [2024-04-24 01:47:36.892748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110546 ] 00:17:37.228 [2024-04-24 01:47:37.103754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.487 [2024-04-24 01:47:37.363710] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:37.487 [2024-04-24 01:47:37.363811] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110546' to capture a snapshot of events at runtime. 00:17:37.487 [2024-04-24 01:47:37.363840] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.487 [2024-04-24 01:47:37.363862] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.487 [2024-04-24 01:47:37.363908] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110546 for offline analysis/debug. 00:17:37.487 [2024-04-24 01:47:37.363969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.422 01:47:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.422 01:47:38 -- common/autotest_common.sh@850 -- # return 0 00:17:38.422 01:47:38 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:38.422 01:47:38 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:38.422 01:47:38 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:38.422 01:47:38 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:38.422 01:47:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:38.422 01:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.422 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 ************************************ 00:17:38.422 START TEST rpc_integrity 00:17:38.422 ************************************ 00:17:38.422 01:47:38 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:17:38.422 01:47:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:38.422 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.422 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.422 01:47:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:38.422 01:47:38 -- rpc/rpc.sh@13 -- # jq length 00:17:38.422 01:47:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:38.422 01:47:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:38.422 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.422 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.422 01:47:38 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:38.422 01:47:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:17:38.422 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.422 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.422 01:47:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:38.422 { 00:17:38.422 "name": "Malloc0", 00:17:38.422 "aliases": [ 00:17:38.422 "018c3de6-6e0e-4b96-bd85-1dedd5dc1d35" 00:17:38.422 ], 00:17:38.422 "product_name": "Malloc disk", 00:17:38.422 "block_size": 512, 00:17:38.422 "num_blocks": 16384, 00:17:38.422 "uuid": "018c3de6-6e0e-4b96-bd85-1dedd5dc1d35", 00:17:38.422 "assigned_rate_limits": { 00:17:38.422 "rw_ios_per_sec": 0, 00:17:38.422 "rw_mbytes_per_sec": 0, 00:17:38.422 "r_mbytes_per_sec": 0, 00:17:38.422 "w_mbytes_per_sec": 0 00:17:38.422 }, 00:17:38.422 "claimed": false, 00:17:38.422 "zoned": false, 00:17:38.422 "supported_io_types": { 00:17:38.422 "read": true, 00:17:38.422 "write": true, 00:17:38.422 "unmap": true, 00:17:38.422 "write_zeroes": true, 00:17:38.422 "flush": true, 00:17:38.422 "reset": true, 00:17:38.422 "compare": false, 00:17:38.422 "compare_and_write": false, 00:17:38.422 "abort": true, 00:17:38.422 "nvme_admin": false, 00:17:38.422 "nvme_io": false 00:17:38.422 }, 00:17:38.422 "memory_domains": [ 00:17:38.422 { 00:17:38.422 "dma_device_id": "system", 00:17:38.422 "dma_device_type": 1 00:17:38.422 }, 00:17:38.422 { 00:17:38.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.422 "dma_device_type": 2 00:17:38.422 } 00:17:38.422 ], 00:17:38.422 "driver_specific": {} 00:17:38.422 } 00:17:38.422 ]' 00:17:38.422 01:47:38 -- rpc/rpc.sh@17 -- # jq length 00:17:38.422 01:47:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:38.422 01:47:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:38.422 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.422 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 [2024-04-24 01:47:38.408966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:38.422 [2024-04-24 01:47:38.409079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.422 [2024-04-24 01:47:38.409134] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:38.422 [2024-04-24 01:47:38.409164] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.422 [2024-04-24 01:47:38.411812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.422 [2024-04-24 01:47:38.411872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:38.422 Passthru0 00:17:38.422 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.422 01:47:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:38.422 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.422 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.422 01:47:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:38.422 { 00:17:38.422 "name": "Malloc0", 00:17:38.422 "aliases": [ 00:17:38.422 "018c3de6-6e0e-4b96-bd85-1dedd5dc1d35" 00:17:38.422 ], 00:17:38.422 "product_name": "Malloc disk", 00:17:38.422 "block_size": 512, 00:17:38.422 "num_blocks": 16384, 00:17:38.422 "uuid": "018c3de6-6e0e-4b96-bd85-1dedd5dc1d35", 00:17:38.422 "assigned_rate_limits": { 00:17:38.422 "rw_ios_per_sec": 0, 00:17:38.422 "rw_mbytes_per_sec": 0, 00:17:38.422 "r_mbytes_per_sec": 0, 00:17:38.422 
"w_mbytes_per_sec": 0 00:17:38.422 }, 00:17:38.422 "claimed": true, 00:17:38.422 "claim_type": "exclusive_write", 00:17:38.422 "zoned": false, 00:17:38.422 "supported_io_types": { 00:17:38.422 "read": true, 00:17:38.422 "write": true, 00:17:38.422 "unmap": true, 00:17:38.422 "write_zeroes": true, 00:17:38.422 "flush": true, 00:17:38.422 "reset": true, 00:17:38.422 "compare": false, 00:17:38.422 "compare_and_write": false, 00:17:38.422 "abort": true, 00:17:38.422 "nvme_admin": false, 00:17:38.422 "nvme_io": false 00:17:38.422 }, 00:17:38.422 "memory_domains": [ 00:17:38.422 { 00:17:38.422 "dma_device_id": "system", 00:17:38.422 "dma_device_type": 1 00:17:38.422 }, 00:17:38.422 { 00:17:38.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.423 "dma_device_type": 2 00:17:38.423 } 00:17:38.423 ], 00:17:38.423 "driver_specific": {} 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "name": "Passthru0", 00:17:38.423 "aliases": [ 00:17:38.423 "8f3621c8-7cfe-5011-a5f5-82c6223c1aa4" 00:17:38.423 ], 00:17:38.423 "product_name": "passthru", 00:17:38.423 "block_size": 512, 00:17:38.423 "num_blocks": 16384, 00:17:38.423 "uuid": "8f3621c8-7cfe-5011-a5f5-82c6223c1aa4", 00:17:38.423 "assigned_rate_limits": { 00:17:38.423 "rw_ios_per_sec": 0, 00:17:38.423 "rw_mbytes_per_sec": 0, 00:17:38.423 "r_mbytes_per_sec": 0, 00:17:38.423 "w_mbytes_per_sec": 0 00:17:38.423 }, 00:17:38.423 "claimed": false, 00:17:38.423 "zoned": false, 00:17:38.423 "supported_io_types": { 00:17:38.423 "read": true, 00:17:38.423 "write": true, 00:17:38.423 "unmap": true, 00:17:38.423 "write_zeroes": true, 00:17:38.423 "flush": true, 00:17:38.423 "reset": true, 00:17:38.423 "compare": false, 00:17:38.423 "compare_and_write": false, 00:17:38.423 "abort": true, 00:17:38.423 "nvme_admin": false, 00:17:38.423 "nvme_io": false 00:17:38.423 }, 00:17:38.423 "memory_domains": [ 00:17:38.423 { 00:17:38.423 "dma_device_id": "system", 00:17:38.423 "dma_device_type": 1 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.423 "dma_device_type": 2 00:17:38.423 } 00:17:38.423 ], 00:17:38.423 "driver_specific": { 00:17:38.423 "passthru": { 00:17:38.423 "name": "Passthru0", 00:17:38.423 "base_bdev_name": "Malloc0" 00:17:38.423 } 00:17:38.423 } 00:17:38.423 } 00:17:38.423 ]' 00:17:38.423 01:47:38 -- rpc/rpc.sh@21 -- # jq length 00:17:38.423 01:47:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:38.423 01:47:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:38.423 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.423 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.423 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.423 01:47:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:38.423 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.423 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.681 01:47:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:38.681 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.681 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.681 01:47:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:38.681 01:47:38 -- rpc/rpc.sh@26 -- # jq length 00:17:38.681 01:47:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:38.681 00:17:38.681 real 0m0.314s 00:17:38.681 user 0m0.182s 00:17:38.681 sys 0m0.031s 00:17:38.681 01:47:38 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:38.681 ************************************ 00:17:38.681 END TEST rpc_integrity 00:17:38.681 ************************************ 00:17:38.681 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 01:47:38 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:38.681 01:47:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:38.681 01:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.681 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 ************************************ 00:17:38.681 START TEST rpc_plugins 00:17:38.681 ************************************ 00:17:38.681 01:47:38 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:17:38.681 01:47:38 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:17:38.681 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.681 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.681 01:47:38 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:38.681 01:47:38 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:38.681 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.681 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.681 01:47:38 -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:38.681 { 00:17:38.681 "name": "Malloc1", 00:17:38.681 "aliases": [ 00:17:38.681 "6731b15a-40b9-491b-ba79-9d7dc69f2f65" 00:17:38.681 ], 00:17:38.681 "product_name": "Malloc disk", 00:17:38.681 "block_size": 4096, 00:17:38.681 "num_blocks": 256, 00:17:38.681 "uuid": "6731b15a-40b9-491b-ba79-9d7dc69f2f65", 00:17:38.681 "assigned_rate_limits": { 00:17:38.681 "rw_ios_per_sec": 0, 00:17:38.681 "rw_mbytes_per_sec": 0, 00:17:38.681 "r_mbytes_per_sec": 0, 00:17:38.681 "w_mbytes_per_sec": 0 00:17:38.681 }, 00:17:38.681 "claimed": false, 00:17:38.681 "zoned": false, 00:17:38.681 "supported_io_types": { 00:17:38.681 "read": true, 00:17:38.681 "write": true, 00:17:38.681 "unmap": true, 00:17:38.681 "write_zeroes": true, 00:17:38.681 "flush": true, 00:17:38.681 "reset": true, 00:17:38.681 "compare": false, 00:17:38.681 "compare_and_write": false, 00:17:38.681 "abort": true, 00:17:38.681 "nvme_admin": false, 00:17:38.681 "nvme_io": false 00:17:38.681 }, 00:17:38.681 "memory_domains": [ 00:17:38.681 { 00:17:38.681 "dma_device_id": "system", 00:17:38.681 "dma_device_type": 1 00:17:38.681 }, 00:17:38.681 { 00:17:38.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.681 "dma_device_type": 2 00:17:38.681 } 00:17:38.681 ], 00:17:38.681 "driver_specific": {} 00:17:38.681 } 00:17:38.681 ]' 00:17:38.681 01:47:38 -- rpc/rpc.sh@32 -- # jq length 00:17:38.939 01:47:38 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:38.939 01:47:38 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:38.939 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.939 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.939 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.939 01:47:38 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:38.939 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.939 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.939 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.939 01:47:38 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:38.939 01:47:38 -- rpc/rpc.sh@36 -- # 
jq length 00:17:38.939 01:47:38 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:38.939 00:17:38.939 real 0m0.178s 00:17:38.940 user 0m0.106s 00:17:38.940 sys 0m0.012s 00:17:38.940 01:47:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:38.940 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 ************************************ 00:17:38.940 END TEST rpc_plugins 00:17:38.940 ************************************ 00:17:38.940 01:47:38 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:38.940 01:47:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:38.940 01:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.940 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 ************************************ 00:17:38.940 START TEST rpc_trace_cmd_test 00:17:38.940 ************************************ 00:17:38.940 01:47:38 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:17:38.940 01:47:38 -- rpc/rpc.sh@40 -- # local info 00:17:38.940 01:47:38 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:38.940 01:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.940 01:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 01:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.940 01:47:38 -- rpc/rpc.sh@42 -- # info='{ 00:17:38.940 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110546", 00:17:38.940 "tpoint_group_mask": "0x8", 00:17:38.940 "iscsi_conn": { 00:17:38.940 "mask": "0x2", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "scsi": { 00:17:38.940 "mask": "0x4", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "bdev": { 00:17:38.940 "mask": "0x8", 00:17:38.940 "tpoint_mask": "0xffffffffffffffff" 00:17:38.940 }, 00:17:38.940 "nvmf_rdma": { 00:17:38.940 "mask": "0x10", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "nvmf_tcp": { 00:17:38.940 "mask": "0x20", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "ftl": { 00:17:38.940 "mask": "0x40", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "blobfs": { 00:17:38.940 "mask": "0x80", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "dsa": { 00:17:38.940 "mask": "0x200", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "thread": { 00:17:38.940 "mask": "0x400", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "nvme_pcie": { 00:17:38.940 "mask": "0x800", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "iaa": { 00:17:38.940 "mask": "0x1000", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "nvme_tcp": { 00:17:38.940 "mask": "0x2000", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "bdev_nvme": { 00:17:38.940 "mask": "0x4000", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 }, 00:17:38.940 "sock": { 00:17:38.940 "mask": "0x8000", 00:17:38.940 "tpoint_mask": "0x0" 00:17:38.940 } 00:17:38.940 }' 00:17:38.940 01:47:38 -- rpc/rpc.sh@43 -- # jq length 00:17:39.198 01:47:39 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:17:39.198 01:47:39 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:39.198 01:47:39 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:39.198 01:47:39 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:39.198 01:47:39 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:39.198 01:47:39 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:39.198 01:47:39 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:39.198 01:47:39 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
00:17:39.198 01:47:39 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:39.198 00:17:39.198 real 0m0.238s 00:17:39.198 user 0m0.197s 00:17:39.198 sys 0m0.037s 00:17:39.198 01:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:39.198 ************************************ 00:17:39.198 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.198 END TEST rpc_trace_cmd_test 00:17:39.198 ************************************ 00:17:39.198 01:47:39 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:39.198 01:47:39 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:39.198 01:47:39 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:39.198 01:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:39.198 01:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.198 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.459 ************************************ 00:17:39.459 START TEST rpc_daemon_integrity 00:17:39.459 ************************************ 00:17:39.459 01:47:39 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:17:39.459 01:47:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:39.459 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.459 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.459 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.459 01:47:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:39.459 01:47:39 -- rpc/rpc.sh@13 -- # jq length 00:17:39.459 01:47:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:39.459 01:47:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:39.459 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.459 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.459 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.459 01:47:39 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:17:39.459 01:47:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:39.459 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.459 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.459 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.459 01:47:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:39.459 { 00:17:39.459 "name": "Malloc2", 00:17:39.459 "aliases": [ 00:17:39.459 "48d29e3f-5557-4c94-8603-09704305d40c" 00:17:39.459 ], 00:17:39.459 "product_name": "Malloc disk", 00:17:39.459 "block_size": 512, 00:17:39.459 "num_blocks": 16384, 00:17:39.459 "uuid": "48d29e3f-5557-4c94-8603-09704305d40c", 00:17:39.459 "assigned_rate_limits": { 00:17:39.459 "rw_ios_per_sec": 0, 00:17:39.459 "rw_mbytes_per_sec": 0, 00:17:39.459 "r_mbytes_per_sec": 0, 00:17:39.459 "w_mbytes_per_sec": 0 00:17:39.459 }, 00:17:39.459 "claimed": false, 00:17:39.459 "zoned": false, 00:17:39.459 "supported_io_types": { 00:17:39.459 "read": true, 00:17:39.459 "write": true, 00:17:39.459 "unmap": true, 00:17:39.459 "write_zeroes": true, 00:17:39.459 "flush": true, 00:17:39.459 "reset": true, 00:17:39.459 "compare": false, 00:17:39.459 "compare_and_write": false, 00:17:39.459 "abort": true, 00:17:39.459 "nvme_admin": false, 00:17:39.459 "nvme_io": false 00:17:39.459 }, 00:17:39.459 "memory_domains": [ 00:17:39.459 { 00:17:39.459 "dma_device_id": "system", 00:17:39.459 "dma_device_type": 1 00:17:39.459 }, 00:17:39.459 { 00:17:39.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.459 "dma_device_type": 2 00:17:39.459 } 00:17:39.459 ], 00:17:39.459 "driver_specific": {} 00:17:39.459 } 00:17:39.459 ]' 00:17:39.459 01:47:39 -- 
rpc/rpc.sh@17 -- # jq length 00:17:39.459 01:47:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:39.459 01:47:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:39.459 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.459 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.459 [2024-04-24 01:47:39.454279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:39.459 [2024-04-24 01:47:39.454401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.459 [2024-04-24 01:47:39.454439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:39.459 [2024-04-24 01:47:39.454468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.459 [2024-04-24 01:47:39.457003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.459 [2024-04-24 01:47:39.457059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:39.459 Passthru0 00:17:39.459 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.459 01:47:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:39.459 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.459 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.459 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.459 01:47:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:39.459 { 00:17:39.459 "name": "Malloc2", 00:17:39.459 "aliases": [ 00:17:39.459 "48d29e3f-5557-4c94-8603-09704305d40c" 00:17:39.459 ], 00:17:39.459 "product_name": "Malloc disk", 00:17:39.459 "block_size": 512, 00:17:39.459 "num_blocks": 16384, 00:17:39.459 "uuid": "48d29e3f-5557-4c94-8603-09704305d40c", 00:17:39.459 "assigned_rate_limits": { 00:17:39.459 "rw_ios_per_sec": 0, 00:17:39.459 "rw_mbytes_per_sec": 0, 00:17:39.459 "r_mbytes_per_sec": 0, 00:17:39.459 "w_mbytes_per_sec": 0 00:17:39.459 }, 00:17:39.459 "claimed": true, 00:17:39.459 "claim_type": "exclusive_write", 00:17:39.459 "zoned": false, 00:17:39.459 "supported_io_types": { 00:17:39.459 "read": true, 00:17:39.460 "write": true, 00:17:39.460 "unmap": true, 00:17:39.460 "write_zeroes": true, 00:17:39.460 "flush": true, 00:17:39.460 "reset": true, 00:17:39.460 "compare": false, 00:17:39.460 "compare_and_write": false, 00:17:39.460 "abort": true, 00:17:39.460 "nvme_admin": false, 00:17:39.460 "nvme_io": false 00:17:39.460 }, 00:17:39.460 "memory_domains": [ 00:17:39.460 { 00:17:39.460 "dma_device_id": "system", 00:17:39.460 "dma_device_type": 1 00:17:39.460 }, 00:17:39.460 { 00:17:39.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.460 "dma_device_type": 2 00:17:39.460 } 00:17:39.460 ], 00:17:39.460 "driver_specific": {} 00:17:39.460 }, 00:17:39.460 { 00:17:39.460 "name": "Passthru0", 00:17:39.460 "aliases": [ 00:17:39.460 "566d3f3f-775d-5986-aec5-89cde58841ed" 00:17:39.460 ], 00:17:39.460 "product_name": "passthru", 00:17:39.460 "block_size": 512, 00:17:39.460 "num_blocks": 16384, 00:17:39.460 "uuid": "566d3f3f-775d-5986-aec5-89cde58841ed", 00:17:39.460 "assigned_rate_limits": { 00:17:39.460 "rw_ios_per_sec": 0, 00:17:39.460 "rw_mbytes_per_sec": 0, 00:17:39.460 "r_mbytes_per_sec": 0, 00:17:39.460 "w_mbytes_per_sec": 0 00:17:39.460 }, 00:17:39.460 "claimed": false, 00:17:39.460 "zoned": false, 00:17:39.460 "supported_io_types": { 00:17:39.460 "read": true, 00:17:39.460 "write": true, 00:17:39.460 "unmap": true, 00:17:39.460 "write_zeroes": true, 00:17:39.460 
"flush": true, 00:17:39.460 "reset": true, 00:17:39.460 "compare": false, 00:17:39.460 "compare_and_write": false, 00:17:39.460 "abort": true, 00:17:39.460 "nvme_admin": false, 00:17:39.460 "nvme_io": false 00:17:39.460 }, 00:17:39.460 "memory_domains": [ 00:17:39.460 { 00:17:39.460 "dma_device_id": "system", 00:17:39.460 "dma_device_type": 1 00:17:39.460 }, 00:17:39.460 { 00:17:39.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.460 "dma_device_type": 2 00:17:39.460 } 00:17:39.460 ], 00:17:39.460 "driver_specific": { 00:17:39.460 "passthru": { 00:17:39.460 "name": "Passthru0", 00:17:39.460 "base_bdev_name": "Malloc2" 00:17:39.460 } 00:17:39.460 } 00:17:39.460 } 00:17:39.460 ]' 00:17:39.460 01:47:39 -- rpc/rpc.sh@21 -- # jq length 00:17:39.460 01:47:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:39.460 01:47:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:39.460 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.460 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.460 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.460 01:47:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:39.460 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.460 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.717 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.717 01:47:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:39.717 01:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.717 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.717 01:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.717 01:47:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:39.717 01:47:39 -- rpc/rpc.sh@26 -- # jq length 00:17:39.717 01:47:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:39.717 00:17:39.717 real 0m0.298s 00:17:39.717 user 0m0.158s 00:17:39.717 sys 0m0.043s 00:17:39.717 01:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:39.717 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:39.717 ************************************ 00:17:39.717 END TEST rpc_daemon_integrity 00:17:39.717 ************************************ 00:17:39.717 01:47:39 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:39.717 01:47:39 -- rpc/rpc.sh@84 -- # killprocess 110546 00:17:39.717 01:47:39 -- common/autotest_common.sh@936 -- # '[' -z 110546 ']' 00:17:39.717 01:47:39 -- common/autotest_common.sh@940 -- # kill -0 110546 00:17:39.717 01:47:39 -- common/autotest_common.sh@941 -- # uname 00:17:39.717 01:47:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.717 01:47:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110546 00:17:39.717 01:47:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:39.717 killing process with pid 110546 00:17:39.717 01:47:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:39.717 01:47:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110546' 00:17:39.717 01:47:39 -- common/autotest_common.sh@955 -- # kill 110546 00:17:39.717 01:47:39 -- common/autotest_common.sh@960 -- # wait 110546 00:17:43.000 00:17:43.000 real 0m5.707s 00:17:43.000 user 0m6.402s 00:17:43.000 sys 0m0.926s 00:17:43.000 01:47:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:43.000 01:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.000 ************************************ 00:17:43.000 END TEST rpc 00:17:43.000 
************************************ 00:17:43.000 01:47:42 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:43.000 01:47:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:43.000 01:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.000 01:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.000 ************************************ 00:17:43.000 START TEST skip_rpc 00:17:43.000 ************************************ 00:17:43.000 01:47:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:43.000 * Looking for test storage... 00:17:43.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:43.000 01:47:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:43.000 01:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.000 01:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.000 ************************************ 00:17:43.000 START TEST skip_rpc 00:17:43.000 ************************************ 00:17:43.000 01:47:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=110828 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:43.000 01:47:42 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:43.000 [2024-04-24 01:47:42.725040] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:17:43.000 [2024-04-24 01:47:42.725249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110828 ] 00:17:43.000 [2024-04-24 01:47:42.905823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.307 [2024-04-24 01:47:43.128342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.575 01:47:47 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:48.575 01:47:47 -- common/autotest_common.sh@638 -- # local es=0 00:17:48.575 01:47:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:48.575 01:47:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:48.575 01:47:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:48.575 01:47:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:48.575 01:47:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:48.575 01:47:47 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:17:48.575 01:47:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.575 01:47:47 -- common/autotest_common.sh@10 -- # set +x 00:17:48.575 01:47:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:48.575 01:47:47 -- common/autotest_common.sh@641 -- # es=1 00:17:48.575 01:47:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:48.575 01:47:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:48.575 01:47:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:48.575 01:47:47 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:48.575 01:47:47 -- rpc/skip_rpc.sh@23 -- # killprocess 110828 00:17:48.575 01:47:47 -- common/autotest_common.sh@936 -- # '[' -z 110828 ']' 00:17:48.575 01:47:47 -- common/autotest_common.sh@940 -- # kill -0 110828 00:17:48.575 01:47:47 -- common/autotest_common.sh@941 -- # uname 00:17:48.575 01:47:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.575 01:47:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110828 00:17:48.575 01:47:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:48.575 01:47:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:48.575 killing process with pid 110828 00:17:48.575 01:47:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110828' 00:17:48.575 01:47:47 -- common/autotest_common.sh@955 -- # kill 110828 00:17:48.575 01:47:47 -- common/autotest_common.sh@960 -- # wait 110828 00:17:50.478 00:17:50.478 real 0m7.746s 00:17:50.478 user 0m7.269s 00:17:50.478 sys 0m0.398s 00:17:50.478 01:47:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:50.478 ************************************ 00:17:50.478 END TEST skip_rpc 00:17:50.478 ************************************ 00:17:50.478 01:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:50.478 01:47:50 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:50.478 01:47:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:50.478 01:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.478 01:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:50.478 ************************************ 00:17:50.478 START TEST skip_rpc_with_json 00:17:50.478 ************************************ 00:17:50.478 01:47:50 -- common/autotest_common.sh@1111 -- # 
test_skip_rpc_with_json 00:17:50.478 01:47:50 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:50.478 01:47:50 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=110958 00:17:50.478 01:47:50 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:50.478 01:47:50 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:50.478 01:47:50 -- rpc/skip_rpc.sh@31 -- # waitforlisten 110958 00:17:50.478 01:47:50 -- common/autotest_common.sh@817 -- # '[' -z 110958 ']' 00:17:50.478 01:47:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.478 01:47:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:50.478 01:47:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.478 01:47:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:50.478 01:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:50.478 [2024-04-24 01:47:50.559197] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:17:50.478 [2024-04-24 01:47:50.559424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110958 ] 00:17:50.737 [2024-04-24 01:47:50.741605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.027 [2024-04-24 01:47:50.959868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.964 01:47:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:51.964 01:47:51 -- common/autotest_common.sh@850 -- # return 0 00:17:51.964 01:47:51 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:51.964 01:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.964 01:47:51 -- common/autotest_common.sh@10 -- # set +x 00:17:51.964 [2024-04-24 01:47:51.810874] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:51.964 request: 00:17:51.964 { 00:17:51.964 "trtype": "tcp", 00:17:51.964 "method": "nvmf_get_transports", 00:17:51.964 "req_id": 1 00:17:51.964 } 00:17:51.964 Got JSON-RPC error response 00:17:51.964 response: 00:17:51.964 { 00:17:51.964 "code": -19, 00:17:51.964 "message": "No such device" 00:17:51.964 } 00:17:51.964 01:47:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:51.964 01:47:51 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:51.964 01:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.964 01:47:51 -- common/autotest_common.sh@10 -- # set +x 00:17:51.964 [2024-04-24 01:47:51.818977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.964 01:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.964 01:47:51 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:51.964 01:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.964 01:47:51 -- common/autotest_common.sh@10 -- # set +x 00:17:51.964 01:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.964 01:47:51 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:51.964 { 00:17:51.964 "subsystems": [ 00:17:51.964 { 00:17:51.964 "subsystem": "scheduler", 00:17:51.964 "config": [ 00:17:51.964 { 00:17:51.964 "method": 
"framework_set_scheduler", 00:17:51.964 "params": { 00:17:51.964 "name": "static" 00:17:51.964 } 00:17:51.964 } 00:17:51.964 ] 00:17:51.964 }, 00:17:51.964 { 00:17:51.964 "subsystem": "vmd", 00:17:51.964 "config": [] 00:17:51.964 }, 00:17:51.964 { 00:17:51.964 "subsystem": "sock", 00:17:51.964 "config": [ 00:17:51.964 { 00:17:51.964 "method": "sock_impl_set_options", 00:17:51.964 "params": { 00:17:51.965 "impl_name": "posix", 00:17:51.965 "recv_buf_size": 2097152, 00:17:51.965 "send_buf_size": 2097152, 00:17:51.965 "enable_recv_pipe": true, 00:17:51.965 "enable_quickack": false, 00:17:51.965 "enable_placement_id": 0, 00:17:51.965 "enable_zerocopy_send_server": true, 00:17:51.965 "enable_zerocopy_send_client": false, 00:17:51.965 "zerocopy_threshold": 0, 00:17:51.965 "tls_version": 0, 00:17:51.965 "enable_ktls": false 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "sock_impl_set_options", 00:17:51.965 "params": { 00:17:51.965 "impl_name": "ssl", 00:17:51.965 "recv_buf_size": 4096, 00:17:51.965 "send_buf_size": 4096, 00:17:51.965 "enable_recv_pipe": true, 00:17:51.965 "enable_quickack": false, 00:17:51.965 "enable_placement_id": 0, 00:17:51.965 "enable_zerocopy_send_server": true, 00:17:51.965 "enable_zerocopy_send_client": false, 00:17:51.965 "zerocopy_threshold": 0, 00:17:51.965 "tls_version": 0, 00:17:51.965 "enable_ktls": false 00:17:51.965 } 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "iobuf", 00:17:51.965 "config": [ 00:17:51.965 { 00:17:51.965 "method": "iobuf_set_options", 00:17:51.965 "params": { 00:17:51.965 "small_pool_count": 8192, 00:17:51.965 "large_pool_count": 1024, 00:17:51.965 "small_bufsize": 8192, 00:17:51.965 "large_bufsize": 135168 00:17:51.965 } 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "keyring", 00:17:51.965 "config": [] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "accel", 00:17:51.965 "config": [ 00:17:51.965 { 00:17:51.965 "method": "accel_set_options", 00:17:51.965 "params": { 00:17:51.965 "small_cache_size": 128, 00:17:51.965 "large_cache_size": 16, 00:17:51.965 "task_count": 2048, 00:17:51.965 "sequence_count": 2048, 00:17:51.965 "buf_count": 2048 00:17:51.965 } 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "bdev", 00:17:51.965 "config": [ 00:17:51.965 { 00:17:51.965 "method": "bdev_set_options", 00:17:51.965 "params": { 00:17:51.965 "bdev_io_pool_size": 65535, 00:17:51.965 "bdev_io_cache_size": 256, 00:17:51.965 "bdev_auto_examine": true, 00:17:51.965 "iobuf_small_cache_size": 128, 00:17:51.965 "iobuf_large_cache_size": 16 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "bdev_raid_set_options", 00:17:51.965 "params": { 00:17:51.965 "process_window_size_kb": 1024 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "bdev_nvme_set_options", 00:17:51.965 "params": { 00:17:51.965 "action_on_timeout": "none", 00:17:51.965 "timeout_us": 0, 00:17:51.965 "timeout_admin_us": 0, 00:17:51.965 "keep_alive_timeout_ms": 10000, 00:17:51.965 "arbitration_burst": 0, 00:17:51.965 "low_priority_weight": 0, 00:17:51.965 "medium_priority_weight": 0, 00:17:51.965 "high_priority_weight": 0, 00:17:51.965 "nvme_adminq_poll_period_us": 10000, 00:17:51.965 "nvme_ioq_poll_period_us": 0, 00:17:51.965 "io_queue_requests": 0, 00:17:51.965 "delay_cmd_submit": true, 00:17:51.965 "transport_retry_count": 4, 00:17:51.965 "bdev_retry_count": 3, 00:17:51.965 "transport_ack_timeout": 0, 00:17:51.965 
"ctrlr_loss_timeout_sec": 0, 00:17:51.965 "reconnect_delay_sec": 0, 00:17:51.965 "fast_io_fail_timeout_sec": 0, 00:17:51.965 "disable_auto_failback": false, 00:17:51.965 "generate_uuids": false, 00:17:51.965 "transport_tos": 0, 00:17:51.965 "nvme_error_stat": false, 00:17:51.965 "rdma_srq_size": 0, 00:17:51.965 "io_path_stat": false, 00:17:51.965 "allow_accel_sequence": false, 00:17:51.965 "rdma_max_cq_size": 0, 00:17:51.965 "rdma_cm_event_timeout_ms": 0, 00:17:51.965 "dhchap_digests": [ 00:17:51.965 "sha256", 00:17:51.965 "sha384", 00:17:51.965 "sha512" 00:17:51.965 ], 00:17:51.965 "dhchap_dhgroups": [ 00:17:51.965 "null", 00:17:51.965 "ffdhe2048", 00:17:51.965 "ffdhe3072", 00:17:51.965 "ffdhe4096", 00:17:51.965 "ffdhe6144", 00:17:51.965 "ffdhe8192" 00:17:51.965 ] 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "bdev_nvme_set_hotplug", 00:17:51.965 "params": { 00:17:51.965 "period_us": 100000, 00:17:51.965 "enable": false 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "bdev_iscsi_set_options", 00:17:51.965 "params": { 00:17:51.965 "timeout_sec": 30 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "bdev_wait_for_examine" 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "nvmf", 00:17:51.965 "config": [ 00:17:51.965 { 00:17:51.965 "method": "nvmf_set_config", 00:17:51.965 "params": { 00:17:51.965 "discovery_filter": "match_any", 00:17:51.965 "admin_cmd_passthru": { 00:17:51.965 "identify_ctrlr": false 00:17:51.965 } 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "nvmf_set_max_subsystems", 00:17:51.965 "params": { 00:17:51.965 "max_subsystems": 1024 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "nvmf_set_crdt", 00:17:51.965 "params": { 00:17:51.965 "crdt1": 0, 00:17:51.965 "crdt2": 0, 00:17:51.965 "crdt3": 0 00:17:51.965 } 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "method": "nvmf_create_transport", 00:17:51.965 "params": { 00:17:51.965 "trtype": "TCP", 00:17:51.965 "max_queue_depth": 128, 00:17:51.965 "max_io_qpairs_per_ctrlr": 127, 00:17:51.965 "in_capsule_data_size": 4096, 00:17:51.965 "max_io_size": 131072, 00:17:51.965 "io_unit_size": 131072, 00:17:51.965 "max_aq_depth": 128, 00:17:51.965 "num_shared_buffers": 511, 00:17:51.965 "buf_cache_size": 4294967295, 00:17:51.965 "dif_insert_or_strip": false, 00:17:51.965 "zcopy": false, 00:17:51.965 "c2h_success": true, 00:17:51.965 "sock_priority": 0, 00:17:51.965 "abort_timeout_sec": 1, 00:17:51.965 "ack_timeout": 0, 00:17:51.965 "data_wr_pool_size": 0 00:17:51.965 } 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "nbd", 00:17:51.965 "config": [] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "vhost_blk", 00:17:51.965 "config": [] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "scsi", 00:17:51.965 "config": null 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "iscsi", 00:17:51.965 "config": [ 00:17:51.965 { 00:17:51.965 "method": "iscsi_set_options", 00:17:51.965 "params": { 00:17:51.965 "node_base": "iqn.2016-06.io.spdk", 00:17:51.965 "max_sessions": 128, 00:17:51.965 "max_connections_per_session": 2, 00:17:51.965 "max_queue_depth": 64, 00:17:51.965 "default_time2wait": 2, 00:17:51.965 "default_time2retain": 20, 00:17:51.965 "first_burst_length": 8192, 00:17:51.965 "immediate_data": true, 00:17:51.965 "allow_duplicated_isid": false, 00:17:51.965 "error_recovery_level": 0, 00:17:51.965 "nop_timeout": 60, 00:17:51.965 
"nop_in_interval": 30, 00:17:51.965 "disable_chap": false, 00:17:51.965 "require_chap": false, 00:17:51.965 "mutual_chap": false, 00:17:51.965 "chap_group": 0, 00:17:51.965 "max_large_datain_per_connection": 64, 00:17:51.965 "max_r2t_per_connection": 4, 00:17:51.965 "pdu_pool_size": 36864, 00:17:51.965 "immediate_data_pool_size": 16384, 00:17:51.965 "data_out_pool_size": 2048 00:17:51.965 } 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 }, 00:17:51.965 { 00:17:51.965 "subsystem": "vhost_scsi", 00:17:51.965 "config": [] 00:17:51.965 } 00:17:51.965 ] 00:17:51.965 } 00:17:51.965 01:47:51 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:51.965 01:47:51 -- rpc/skip_rpc.sh@40 -- # killprocess 110958 00:17:51.965 01:47:51 -- common/autotest_common.sh@936 -- # '[' -z 110958 ']' 00:17:51.965 01:47:51 -- common/autotest_common.sh@940 -- # kill -0 110958 00:17:51.965 01:47:51 -- common/autotest_common.sh@941 -- # uname 00:17:51.965 01:47:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.965 01:47:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110958 00:17:51.965 01:47:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:51.965 01:47:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:51.965 killing process with pid 110958 00:17:51.965 01:47:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110958' 00:17:51.965 01:47:51 -- common/autotest_common.sh@955 -- # kill 110958 00:17:51.965 01:47:51 -- common/autotest_common.sh@960 -- # wait 110958 00:17:55.277 01:47:54 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=111022 00:17:55.277 01:47:54 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:55.277 01:47:54 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:18:00.552 01:47:59 -- rpc/skip_rpc.sh@50 -- # killprocess 111022 00:18:00.552 01:47:59 -- common/autotest_common.sh@936 -- # '[' -z 111022 ']' 00:18:00.552 01:47:59 -- common/autotest_common.sh@940 -- # kill -0 111022 00:18:00.552 01:47:59 -- common/autotest_common.sh@941 -- # uname 00:18:00.552 01:47:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.552 01:47:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111022 00:18:00.552 killing process with pid 111022 00:18:00.552 01:47:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:00.552 01:47:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:00.552 01:47:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111022' 00:18:00.552 01:47:59 -- common/autotest_common.sh@955 -- # kill 111022 00:18:00.552 01:47:59 -- common/autotest_common.sh@960 -- # wait 111022 00:18:02.451 01:48:02 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:02.451 01:48:02 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:02.451 00:18:02.451 real 0m11.896s 00:18:02.451 user 0m11.422s 00:18:02.451 sys 0m0.810s 00:18:02.451 01:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:02.451 01:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:02.451 ************************************ 00:18:02.451 END TEST skip_rpc_with_json 00:18:02.451 ************************************ 00:18:02.451 01:48:02 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:18:02.451 01:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 
1 ']' 00:18:02.451 01:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.451 01:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:02.451 ************************************ 00:18:02.451 START TEST skip_rpc_with_delay 00:18:02.451 ************************************ 00:18:02.451 01:48:02 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:18:02.451 01:48:02 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:02.451 01:48:02 -- common/autotest_common.sh@638 -- # local es=0 00:18:02.451 01:48:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:02.451 01:48:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.451 01:48:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.451 01:48:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.451 01:48:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.451 01:48:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.451 01:48:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.451 01:48:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.451 01:48:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:02.451 01:48:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:02.451 [2024-04-24 01:48:02.524655] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
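Note on the skip_rpc_with_delay case traced above: it is a negative test. spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc, and the expected outcome is exactly the startup error printed above, since --wait-for-rpc is only meaningful when an RPC server will be started. The following is a minimal reproduction sketch, not the test's own assertion mechanism (the harness wraps the call in its NOT helper); the binary path, flags, and error string are taken verbatim from the trace above, and the grep check is only illustrative:

    # Launch the target with the mutually exclusive flags; startup is expected to fail.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 2>&1 \
      | grep -q "Cannot use '--wait-for-rpc' if no RPC server is going to be started" \
      && echo "got expected startup error"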
00:18:02.451 [2024-04-24 01:48:02.524905] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:18:02.709 01:48:02 -- common/autotest_common.sh@641 -- # es=1 00:18:02.709 01:48:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:02.709 01:48:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:02.709 01:48:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:02.709 00:18:02.709 real 0m0.146s 00:18:02.709 user 0m0.067s 00:18:02.709 sys 0m0.077s 00:18:02.709 01:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:02.709 01:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:02.709 ************************************ 00:18:02.709 END TEST skip_rpc_with_delay 00:18:02.709 ************************************ 00:18:02.709 01:48:02 -- rpc/skip_rpc.sh@77 -- # uname 00:18:02.709 01:48:02 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:18:02.709 01:48:02 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:18:02.709 01:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:02.709 01:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.709 01:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:02.709 ************************************ 00:18:02.709 START TEST exit_on_failed_rpc_init 00:18:02.709 ************************************ 00:18:02.709 01:48:02 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:18:02.709 01:48:02 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=111187 00:18:02.709 01:48:02 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:02.709 01:48:02 -- rpc/skip_rpc.sh@63 -- # waitforlisten 111187 00:18:02.709 01:48:02 -- common/autotest_common.sh@817 -- # '[' -z 111187 ']' 00:18:02.709 01:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.709 01:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.709 01:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.709 01:48:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.709 01:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:02.709 [2024-04-24 01:48:02.744490] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:02.709 [2024-04-24 01:48:02.744717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111187 ] 00:18:02.965 [2024-04-24 01:48:02.913047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.221 [2024-04-24 01:48:03.170326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.187 01:48:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.187 01:48:04 -- common/autotest_common.sh@850 -- # return 0 00:18:04.187 01:48:04 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:04.187 01:48:04 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:04.187 01:48:04 -- common/autotest_common.sh@638 -- # local es=0 00:18:04.187 01:48:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:04.187 01:48:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.187 01:48:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.187 01:48:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.187 01:48:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.187 01:48:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.187 01:48:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.187 01:48:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.187 01:48:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:04.187 01:48:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:04.187 [2024-04-24 01:48:04.236990] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:18:04.187 [2024-04-24 01:48:04.237241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111209 ] 00:18:04.444 [2024-04-24 01:48:04.413183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.702 [2024-04-24 01:48:04.664365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.702 [2024-04-24 01:48:04.664509] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:04.702 [2024-04-24 01:48:04.664566] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:04.702 [2024-04-24 01:48:04.664601] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:05.268 01:48:05 -- common/autotest_common.sh@641 -- # es=234 00:18:05.268 01:48:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:05.268 01:48:05 -- common/autotest_common.sh@650 -- # es=106 00:18:05.268 01:48:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:05.268 01:48:05 -- common/autotest_common.sh@658 -- # es=1 00:18:05.268 01:48:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:05.268 01:48:05 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:05.268 01:48:05 -- rpc/skip_rpc.sh@70 -- # killprocess 111187 00:18:05.268 01:48:05 -- common/autotest_common.sh@936 -- # '[' -z 111187 ']' 00:18:05.268 01:48:05 -- common/autotest_common.sh@940 -- # kill -0 111187 00:18:05.268 01:48:05 -- common/autotest_common.sh@941 -- # uname 00:18:05.268 01:48:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.268 01:48:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111187 00:18:05.268 01:48:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:05.268 01:48:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:05.268 killing process with pid 111187 00:18:05.268 01:48:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111187' 00:18:05.268 01:48:05 -- common/autotest_common.sh@955 -- # kill 111187 00:18:05.268 01:48:05 -- common/autotest_common.sh@960 -- # wait 111187 00:18:07.798 00:18:07.798 real 0m5.165s 00:18:07.798 user 0m5.960s 00:18:07.798 sys 0m0.611s 00:18:07.798 ************************************ 00:18:07.798 END TEST exit_on_failed_rpc_init 00:18:07.798 ************************************ 00:18:07.798 01:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.798 01:48:07 -- common/autotest_common.sh@10 -- # set +x 00:18:07.798 01:48:07 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:08.056 00:18:08.056 real 0m25.412s 00:18:08.056 user 0m24.955s 00:18:08.056 sys 0m2.117s 00:18:08.056 01:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.056 01:48:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.056 ************************************ 00:18:08.056 END TEST skip_rpc 00:18:08.056 ************************************ 00:18:08.056 01:48:07 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:08.056 01:48:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:08.056 01:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.056 01:48:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.056 ************************************ 00:18:08.056 START TEST rpc_client 00:18:08.056 ************************************ 00:18:08.056 01:48:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:08.056 * Looking for test storage... 
00:18:08.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:18:08.056 01:48:08 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:18:08.315 OK 00:18:08.316 01:48:08 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:18:08.316 00:18:08.316 real 0m0.180s 00:18:08.316 user 0m0.077s 00:18:08.316 sys 0m0.117s 00:18:08.316 01:48:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.316 01:48:08 -- common/autotest_common.sh@10 -- # set +x 00:18:08.316 ************************************ 00:18:08.316 END TEST rpc_client 00:18:08.316 ************************************ 00:18:08.316 01:48:08 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:08.316 01:48:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:08.316 01:48:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.316 01:48:08 -- common/autotest_common.sh@10 -- # set +x 00:18:08.316 ************************************ 00:18:08.316 START TEST json_config 00:18:08.316 ************************************ 00:18:08.316 01:48:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:08.316 01:48:08 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:08.316 01:48:08 -- nvmf/common.sh@7 -- # uname -s 00:18:08.316 01:48:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.316 01:48:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.316 01:48:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.316 01:48:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.316 01:48:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.316 01:48:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.316 01:48:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.316 01:48:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.316 01:48:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.316 01:48:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.316 01:48:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6761dc03-21f9-41a9-861b-460638ac0cad 00:18:08.316 01:48:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=6761dc03-21f9-41a9-861b-460638ac0cad 00:18:08.316 01:48:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.316 01:48:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.316 01:48:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:08.316 01:48:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.316 01:48:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.316 01:48:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.316 01:48:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.316 01:48:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.316 01:48:08 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:08.316 01:48:08 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:08.316 01:48:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:08.316 01:48:08 -- paths/export.sh@5 -- # export PATH 00:18:08.316 01:48:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:08.316 01:48:08 -- nvmf/common.sh@47 -- # : 0 00:18:08.316 01:48:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.316 01:48:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.316 01:48:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.316 01:48:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.316 01:48:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.316 01:48:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.316 01:48:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.316 01:48:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.316 01:48:08 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:08.316 01:48:08 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:18:08.316 01:48:08 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:18:08.316 01:48:08 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:18:08.316 01:48:08 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:18:08.316 01:48:08 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:18:08.316 01:48:08 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:18:08.316 01:48:08 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:18:08.316 01:48:08 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:18:08.316 01:48:08 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:18:08.316 01:48:08 -- json_config/json_config.sh@33 -- # declare -A app_params 00:18:08.316 01:48:08 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:18:08.316 01:48:08 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:18:08.316 01:48:08 -- json_config/json_config.sh@40 -- # last_event_id=0 00:18:08.316 01:48:08 -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:18:08.316 INFO: JSON configuration test init 00:18:08.316 01:48:08 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:18:08.316 01:48:08 -- json_config/json_config.sh@357 -- # json_config_test_init 00:18:08.316 01:48:08 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:18:08.316 01:48:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:08.316 01:48:08 -- common/autotest_common.sh@10 -- # set +x 00:18:08.316 01:48:08 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:18:08.316 01:48:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:08.316 01:48:08 -- common/autotest_common.sh@10 -- # set +x 00:18:08.316 01:48:08 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:18:08.316 01:48:08 -- json_config/common.sh@9 -- # local app=target 00:18:08.316 01:48:08 -- json_config/common.sh@10 -- # shift 00:18:08.316 01:48:08 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:08.316 01:48:08 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:08.316 01:48:08 -- json_config/common.sh@15 -- # local app_extra_params= 00:18:08.316 01:48:08 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:08.316 01:48:08 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:08.316 01:48:08 -- json_config/common.sh@22 -- # app_pid["$app"]=111389 00:18:08.316 Waiting for target to run... 00:18:08.316 01:48:08 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:08.316 01:48:08 -- json_config/common.sh@25 -- # waitforlisten 111389 /var/tmp/spdk_tgt.sock 00:18:08.316 01:48:08 -- common/autotest_common.sh@817 -- # '[' -z 111389 ']' 00:18:08.316 01:48:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:08.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:08.316 01:48:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.316 01:48:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:08.316 01:48:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.316 01:48:08 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:18:08.316 01:48:08 -- common/autotest_common.sh@10 -- # set +x 00:18:08.575 [2024-04-24 01:48:08.453735] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:08.575 [2024-04-24 01:48:08.453955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111389 ] 00:18:08.837 [2024-04-24 01:48:08.891183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.094 [2024-04-24 01:48:09.170851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.660 01:48:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.660 00:18:09.660 01:48:09 -- common/autotest_common.sh@850 -- # return 0 00:18:09.660 01:48:09 -- json_config/common.sh@26 -- # echo '' 00:18:09.660 01:48:09 -- json_config/json_config.sh@269 -- # create_accel_config 00:18:09.660 01:48:09 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:18:09.660 01:48:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:09.660 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:18:09.660 01:48:09 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:18:09.660 01:48:09 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:18:09.660 01:48:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:09.660 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:18:09.660 01:48:09 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:18:09.660 01:48:09 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:18:09.660 01:48:09 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:18:10.596 01:48:10 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:18:10.596 01:48:10 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:18:10.596 01:48:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:10.596 01:48:10 -- common/autotest_common.sh@10 -- # set +x 00:18:10.596 01:48:10 -- json_config/json_config.sh@45 -- # local ret=0 00:18:10.596 01:48:10 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:18:10.596 01:48:10 -- json_config/json_config.sh@46 -- # local enabled_types 00:18:10.596 01:48:10 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:18:10.596 01:48:10 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:18:10.596 01:48:10 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:18:10.856 01:48:10 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:18:10.856 01:48:10 -- json_config/json_config.sh@48 -- # local get_types 00:18:10.856 01:48:10 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:18:10.856 01:48:10 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:18:10.856 01:48:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:10.856 01:48:10 -- common/autotest_common.sh@10 -- # set +x 00:18:10.856 01:48:10 -- json_config/json_config.sh@55 -- # return 0 00:18:10.856 01:48:10 -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:18:10.856 01:48:10 -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:18:10.856 01:48:10 -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:18:10.856 01:48:10 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:18:10.856 01:48:10 -- common/autotest_common.sh@10 -- # set +x 00:18:10.856 01:48:10 -- json_config/json_config.sh@107 -- # expected_notifications=() 00:18:10.856 01:48:10 -- json_config/json_config.sh@107 -- # local expected_notifications 00:18:10.856 01:48:10 -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:18:10.856 01:48:10 -- json_config/json_config.sh@111 -- # get_notifications 00:18:10.856 01:48:10 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:18:10.856 01:48:10 -- json_config/json_config.sh@61 -- # IFS=: 00:18:10.856 01:48:10 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:10.856 01:48:10 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:18:10.856 01:48:10 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:18:10.856 01:48:10 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:18:11.114 01:48:11 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:18:11.114 01:48:11 -- json_config/json_config.sh@61 -- # IFS=: 00:18:11.114 01:48:11 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:11.114 01:48:11 -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:18:11.114 01:48:11 -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:18:11.114 01:48:11 -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:18:11.114 01:48:11 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:18:11.372 Nvme0n1p0 Nvme0n1p1 00:18:11.372 01:48:11 -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:18:11.372 01:48:11 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:18:11.630 [2024-04-24 01:48:11.598018] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:18:11.630 [2024-04-24 01:48:11.598112] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:18:11.630 00:18:11.630 01:48:11 -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:18:11.630 01:48:11 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:18:11.889 Malloc3 00:18:11.889 01:48:11 -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:18:11.889 01:48:11 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:18:12.147 [2024-04-24 01:48:11.992046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:12.147 [2024-04-24 01:48:11.992184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.147 [2024-04-24 01:48:11.992226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.147 [2024-04-24 01:48:11.992247] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.147 [2024-04-24 01:48:11.994699] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.147 [2024-04-24 01:48:11.994761] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:18:12.147 PTBdevFromMalloc3 00:18:12.147 01:48:12 -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:18:12.147 01:48:12 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:18:12.404 Null0 00:18:12.404 01:48:12 -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:18:12.404 01:48:12 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:18:12.404 Malloc0 00:18:12.701 01:48:12 -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:18:12.701 01:48:12 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:18:13.002 Malloc1 00:18:13.002 01:48:12 -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:18:13.002 01:48:12 -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:18:13.261 102400+0 records in 00:18:13.261 102400+0 records out 00:18:13.261 104857600 bytes (105 MB, 100 MiB) copied, 0.352174 s, 298 MB/s 00:18:13.261 01:48:13 -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:18:13.261 01:48:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:18:13.520 aio_disk 00:18:13.520 01:48:13 -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:18:13.520 01:48:13 -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:18:13.520 01:48:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:18:13.778 80d295ad-de34-4e2a-ad09-ee28c3940a5c 00:18:13.778 01:48:13 -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:18:13.778 01:48:13 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:18:13.778 01:48:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:18:14.037 01:48:13 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:18:14.037 01:48:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:18:14.295 01:48:14 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:18:14.295 01:48:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 
snapshot0 00:18:14.553 01:48:14 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:18:14.553 01:48:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:18:14.812 01:48:14 -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:18:14.812 01:48:14 -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:18:14.812 01:48:14 -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:91fe1372-0752-418e-9ba4-501fc78d8645 bdev_register:b0afaaa5-e2cc-4ef8-88ec-89d062af8b68 bdev_register:79e808dd-3171-4db6-892f-d9702b2dd72f bdev_register:aa46d432-71d8-4bc8-9a4b-e8c05c15e331 00:18:14.812 01:48:14 -- json_config/json_config.sh@67 -- # local events_to_check 00:18:14.812 01:48:14 -- json_config/json_config.sh@68 -- # local recorded_events 00:18:14.812 01:48:14 -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:18:14.812 01:48:14 -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:91fe1372-0752-418e-9ba4-501fc78d8645 bdev_register:b0afaaa5-e2cc-4ef8-88ec-89d062af8b68 bdev_register:79e808dd-3171-4db6-892f-d9702b2dd72f bdev_register:aa46d432-71d8-4bc8-9a4b-e8c05c15e331 00:18:14.812 01:48:14 -- json_config/json_config.sh@71 -- # sort 00:18:14.812 01:48:14 -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:18:14.812 01:48:14 -- json_config/json_config.sh@72 -- # get_notifications 00:18:14.812 01:48:14 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:18:14.812 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:14.812 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:14.812 01:48:14 -- json_config/json_config.sh@72 -- # sort 00:18:14.812 01:48:14 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:18:14.812 01:48:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:18:14.812 01:48:14 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- 
# echo bdev_register:Malloc3 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.071 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.071 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:91fe1372-0752-418e-9ba4-501fc78d8645 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:b0afaaa5-e2cc-4ef8-88ec-89d062af8b68 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:79e808dd-3171-4db6-892f-d9702b2dd72f 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@62 -- # echo bdev_register:aa46d432-71d8-4bc8-9a4b-e8c05c15e331 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # IFS=: 00:18:15.072 01:48:14 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:18:15.072 01:48:14 -- json_config/json_config.sh@74 -- # [[ bdev_register:79e808dd-3171-4db6-892f-d9702b2dd72f bdev_register:91fe1372-0752-418e-9ba4-501fc78d8645 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 
bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aa46d432-71d8-4bc8-9a4b-e8c05c15e331 bdev_register:aio_disk bdev_register:b0afaaa5-e2cc-4ef8-88ec-89d062af8b68 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\9\e\8\0\8\d\d\-\3\1\7\1\-\4\d\b\6\-\8\9\2\f\-\d\9\7\0\2\b\2\d\d\7\2\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\1\f\e\1\3\7\2\-\0\7\5\2\-\4\1\8\e\-\9\b\a\4\-\5\0\1\f\c\7\8\d\8\6\4\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\a\4\6\d\4\3\2\-\7\1\d\8\-\4\b\c\8\-\9\a\4\b\-\e\8\c\0\5\c\1\5\e\3\3\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\0\a\f\a\a\a\5\-\e\2\c\c\-\4\e\f\8\-\8\8\e\c\-\8\9\d\0\6\2\a\f\8\b\6\8 ]] 00:18:15.072 01:48:14 -- json_config/json_config.sh@86 -- # cat 00:18:15.072 01:48:14 -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:79e808dd-3171-4db6-892f-d9702b2dd72f bdev_register:91fe1372-0752-418e-9ba4-501fc78d8645 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aa46d432-71d8-4bc8-9a4b-e8c05c15e331 bdev_register:aio_disk bdev_register:b0afaaa5-e2cc-4ef8-88ec-89d062af8b68 00:18:15.072 Expected events matched: 00:18:15.072 bdev_register:79e808dd-3171-4db6-892f-d9702b2dd72f 00:18:15.072 bdev_register:91fe1372-0752-418e-9ba4-501fc78d8645 00:18:15.072 bdev_register:Malloc0 00:18:15.072 bdev_register:Malloc0p0 00:18:15.072 bdev_register:Malloc0p1 00:18:15.072 bdev_register:Malloc0p2 00:18:15.072 bdev_register:Malloc1 00:18:15.072 bdev_register:Malloc3 00:18:15.072 bdev_register:Null0 00:18:15.072 bdev_register:Nvme0n1 00:18:15.072 bdev_register:Nvme0n1p0 00:18:15.072 bdev_register:Nvme0n1p1 00:18:15.072 bdev_register:PTBdevFromMalloc3 00:18:15.072 bdev_register:aa46d432-71d8-4bc8-9a4b-e8c05c15e331 00:18:15.072 bdev_register:aio_disk 00:18:15.072 bdev_register:b0afaaa5-e2cc-4ef8-88ec-89d062af8b68 00:18:15.072 01:48:14 -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:18:15.072 01:48:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.072 01:48:14 -- common/autotest_common.sh@10 -- # set +x 00:18:15.072 01:48:15 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:18:15.072 01:48:15 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:18:15.072 01:48:15 -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:18:15.072 01:48:15 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:18:15.072 01:48:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.072 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:18:15.072 01:48:15 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:18:15.072 01:48:15 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:18:15.072 01:48:15 -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:18:15.330 MallocBdevForConfigChangeCheck 00:18:15.330 01:48:15 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:18:15.330 01:48:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.330 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:18:15.330 01:48:15 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:18:15.330 01:48:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:15.895 INFO: shutting down applications... 00:18:15.895 01:48:15 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:18:15.895 01:48:15 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:18:15.895 01:48:15 -- json_config/json_config.sh@368 -- # json_config_clear target 00:18:15.895 01:48:15 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:18:15.895 01:48:15 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:18:16.152 [2024-04-24 01:48:16.018355] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:18:16.152 Calling clear_vhost_scsi_subsystem 00:18:16.152 Calling clear_iscsi_subsystem 00:18:16.152 Calling clear_vhost_blk_subsystem 00:18:16.152 Calling clear_nbd_subsystem 00:18:16.152 Calling clear_nvmf_subsystem 00:18:16.152 Calling clear_bdev_subsystem 00:18:16.152 01:48:16 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:18:16.152 01:48:16 -- json_config/json_config.sh@343 -- # count=100 00:18:16.152 01:48:16 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:18:16.152 01:48:16 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:16.152 01:48:16 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:18:16.152 01:48:16 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:18:16.718 01:48:16 -- json_config/json_config.sh@345 -- # break 00:18:16.718 01:48:16 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:18:16.718 01:48:16 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:18:16.718 01:48:16 -- json_config/common.sh@31 -- # local app=target 00:18:16.718 01:48:16 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:18:16.718 01:48:16 -- json_config/common.sh@35 -- # [[ -n 111389 ]] 00:18:16.718 01:48:16 -- json_config/common.sh@38 -- # kill -SIGINT 111389 00:18:16.718 01:48:16 -- json_config/common.sh@40 -- # (( i = 0 )) 00:18:16.718 01:48:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:16.718 01:48:16 -- json_config/common.sh@41 -- # kill -0 111389 00:18:16.718 01:48:16 -- json_config/common.sh@45 -- # sleep 0.5 00:18:17.284 01:48:17 -- json_config/common.sh@40 -- # (( i++ )) 00:18:17.284 01:48:17 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:17.284 01:48:17 -- json_config/common.sh@41 -- # kill -0 111389 00:18:17.284 01:48:17 -- json_config/common.sh@45 -- # sleep 0.5 00:18:17.852 01:48:17 -- json_config/common.sh@40 -- # (( i++ )) 00:18:17.852 01:48:17 -- json_config/common.sh@40 -- # (( i < 
30 )) 00:18:17.852 01:48:17 -- json_config/common.sh@41 -- # kill -0 111389 00:18:17.852 01:48:17 -- json_config/common.sh@45 -- # sleep 0.5 00:18:18.418 01:48:18 -- json_config/common.sh@40 -- # (( i++ )) 00:18:18.418 01:48:18 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:18.418 01:48:18 -- json_config/common.sh@41 -- # kill -0 111389 00:18:18.418 01:48:18 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:18:18.418 01:48:18 -- json_config/common.sh@43 -- # break 00:18:18.418 01:48:18 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:18:18.418 01:48:18 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:18:18.418 SPDK target shutdown done 00:18:18.418 01:48:18 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:18:18.418 INFO: relaunching applications... 00:18:18.418 01:48:18 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:18.418 01:48:18 -- json_config/common.sh@9 -- # local app=target 00:18:18.418 01:48:18 -- json_config/common.sh@10 -- # shift 00:18:18.418 01:48:18 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:18.418 01:48:18 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:18.418 01:48:18 -- json_config/common.sh@15 -- # local app_extra_params= 00:18:18.418 01:48:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:18.418 01:48:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:18.418 01:48:18 -- json_config/common.sh@22 -- # app_pid["$app"]=111668 00:18:18.418 01:48:18 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:18.418 01:48:18 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:18.418 Waiting for target to run... 00:18:18.418 01:48:18 -- json_config/common.sh@25 -- # waitforlisten 111668 /var/tmp/spdk_tgt.sock 00:18:18.418 01:48:18 -- common/autotest_common.sh@817 -- # '[' -z 111668 ']' 00:18:18.418 01:48:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:18.418 01:48:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.418 01:48:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:18.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:18.418 01:48:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.418 01:48:18 -- common/autotest_common.sh@10 -- # set +x 00:18:18.418 [2024-04-24 01:48:18.291519] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
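The shutdown sequence recorded above (kill -SIGINT followed by repeated kill -0 probes every half second) follows the pattern sketched below; this is a minimal bash illustration with a stand-in PID, mirroring the 30 x 0.5 s budget seen in the log rather than reproducing json_config/common.sh verbatim.

# Hypothetical PID standing in for the spdk_tgt instance being stopped.
app_pid=111389
kill -SIGINT "$app_pid"                  # request a clean shutdown
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done' # process has exited
        break
    fi
    sleep 0.5                            # still running, poll again
done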
00:18:18.418 [2024-04-24 01:48:18.291916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111668 ] 00:18:18.676 [2024-04-24 01:48:18.729110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.933 [2024-04-24 01:48:18.989279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.867 [2024-04-24 01:48:19.782323] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:18:19.867 [2024-04-24 01:48:19.782663] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:18:19.867 [2024-04-24 01:48:19.790332] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:18:19.867 [2024-04-24 01:48:19.790608] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:18:19.867 [2024-04-24 01:48:19.798332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:19.867 [2024-04-24 01:48:19.798609] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:18:19.867 [2024-04-24 01:48:19.798783] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:18:19.867 [2024-04-24 01:48:19.895060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:19.867 [2024-04-24 01:48:19.895396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.867 [2024-04-24 01:48:19.895487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:19.867 [2024-04-24 01:48:19.895635] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.867 [2024-04-24 01:48:19.896355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.867 [2024-04-24 01:48:19.896543] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:18:20.126 00:18:20.126 INFO: Checking if target configuration is the same... 00:18:20.126 01:48:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.126 01:48:20 -- common/autotest_common.sh@850 -- # return 0 00:18:20.126 01:48:20 -- json_config/common.sh@26 -- # echo '' 00:18:20.126 01:48:20 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:18:20.126 01:48:20 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:18:20.126 01:48:20 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:20.126 01:48:20 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:18:20.126 01:48:20 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:20.126 + '[' 2 -ne 2 ']' 00:18:20.126 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:18:20.126 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:18:20.126 + rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.126 +++ basename /dev/fd/62 00:18:20.126 ++ mktemp /tmp/62.XXX 00:18:20.126 + tmp_file_1=/tmp/62.2gN 00:18:20.126 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:20.126 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:18:20.126 + tmp_file_2=/tmp/spdk_tgt_config.json.hja 00:18:20.126 + ret=0 00:18:20.126 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:18:20.694 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:18:20.694 + diff -u /tmp/62.2gN /tmp/spdk_tgt_config.json.hja 00:18:20.694 + echo 'INFO: JSON config files are the same' 00:18:20.694 INFO: JSON config files are the same 00:18:20.694 + rm /tmp/62.2gN /tmp/spdk_tgt_config.json.hja 00:18:20.694 + exit 0 00:18:20.694 INFO: changing configuration and checking if this can be detected... 00:18:20.694 01:48:20 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:18:20.694 01:48:20 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:18:20.694 01:48:20 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:18:20.694 01:48:20 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:18:20.952 01:48:20 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:18:20.952 01:48:20 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:20.952 01:48:20 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:20.952 + '[' 2 -ne 2 ']' 00:18:20.952 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:18:20.952 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:18:20.952 + rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.952 +++ basename /dev/fd/62 00:18:20.952 ++ mktemp /tmp/62.XXX 00:18:20.952 + tmp_file_1=/tmp/62.Ed3 00:18:20.952 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:20.952 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:18:20.952 + tmp_file_2=/tmp/spdk_tgt_config.json.PLO 00:18:20.952 + ret=0 00:18:20.952 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:18:21.211 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:18:21.211 + diff -u /tmp/62.Ed3 /tmp/spdk_tgt_config.json.PLO 00:18:21.469 + ret=1 00:18:21.469 + echo '=== Start of file: /tmp/62.Ed3 ===' 00:18:21.469 + cat /tmp/62.Ed3 00:18:21.469 + echo '=== End of file: /tmp/62.Ed3 ===' 00:18:21.469 + echo '' 00:18:21.469 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PLO ===' 00:18:21.469 + cat /tmp/spdk_tgt_config.json.PLO 00:18:21.469 + echo '=== End of file: /tmp/spdk_tgt_config.json.PLO ===' 00:18:21.469 + echo '' 00:18:21.469 + rm /tmp/62.Ed3 /tmp/spdk_tgt_config.json.PLO 00:18:21.469 + exit 1 00:18:21.469 INFO: configuration change detected. 00:18:21.469 01:48:21 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
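The json_diff.sh run above boils down to three steps: dump the live configuration over RPC, canonicalise both JSON files with config_filter.py, then diff them. A sketch of that flow, assuming config_filter.py reads the configuration on stdin; the temp-file names are illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
saved_json=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

live=$(mktemp /tmp/62.XXX)
saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Sort both configs into a canonical form before comparing them.
"$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
"$filter" -method sort < "$saved_json" > "$saved"

if diff -u "$live" "$saved"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$saved"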
00:18:21.469 01:48:21 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:18:21.469 01:48:21 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:18:21.469 01:48:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:21.469 01:48:21 -- common/autotest_common.sh@10 -- # set +x 00:18:21.469 01:48:21 -- json_config/json_config.sh@307 -- # local ret=0 00:18:21.469 01:48:21 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:18:21.469 01:48:21 -- json_config/json_config.sh@317 -- # [[ -n 111668 ]] 00:18:21.469 01:48:21 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:18:21.469 01:48:21 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:18:21.469 01:48:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:21.469 01:48:21 -- common/autotest_common.sh@10 -- # set +x 00:18:21.469 01:48:21 -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:18:21.469 01:48:21 -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:18:21.469 01:48:21 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:18:21.469 01:48:21 -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:18:21.469 01:48:21 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:18:21.728 01:48:21 -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:18:21.728 01:48:21 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:18:22.296 01:48:22 -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:18:22.296 01:48:22 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:18:22.575 01:48:22 -- json_config/json_config.sh@193 -- # uname -s 00:18:22.575 01:48:22 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:18:22.575 01:48:22 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:18:22.575 01:48:22 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:18:22.575 01:48:22 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:18:22.575 01:48:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.575 01:48:22 -- common/autotest_common.sh@10 -- # set +x 00:18:22.575 01:48:22 -- json_config/json_config.sh@323 -- # killprocess 111668 00:18:22.575 01:48:22 -- common/autotest_common.sh@936 -- # '[' -z 111668 ']' 00:18:22.575 01:48:22 -- common/autotest_common.sh@940 -- # kill -0 111668 00:18:22.575 01:48:22 -- common/autotest_common.sh@941 -- # uname 00:18:22.575 01:48:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.575 01:48:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111668 00:18:22.575 killing process with pid 111668 00:18:22.575 01:48:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:22.575 01:48:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:22.575 01:48:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111668' 00:18:22.575 01:48:22 -- common/autotest_common.sh@955 -- # kill 111668 00:18:22.575 01:48:22 -- common/autotest_common.sh@960 -- # wait 111668 00:18:23.947 01:48:23 -- json_config/json_config.sh@326 -- # rm -f 
/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:18:23.947 01:48:23 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:18:23.947 01:48:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:23.947 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:18:23.947 01:48:23 -- json_config/json_config.sh@328 -- # return 0 00:18:23.947 01:48:23 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:18:23.947 INFO: Success 00:18:23.947 ************************************ 00:18:23.947 END TEST json_config 00:18:23.947 ************************************ 00:18:23.947 00:18:23.947 real 0m15.440s 00:18:23.947 user 0m21.490s 00:18:23.947 sys 0m2.762s 00:18:23.947 01:48:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:23.947 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:18:23.947 01:48:23 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:23.947 01:48:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:23.947 01:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:23.947 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:18:23.947 ************************************ 00:18:23.947 START TEST json_config_extra_key 00:18:23.947 ************************************ 00:18:23.947 01:48:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:23.947 01:48:23 -- nvmf/common.sh@7 -- # uname -s 00:18:23.947 01:48:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.947 01:48:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.947 01:48:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.947 01:48:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.947 01:48:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.947 01:48:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.947 01:48:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.947 01:48:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.947 01:48:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.947 01:48:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.947 01:48:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49f871ee-0747-4e18-9f61-49083cceec1d 00:18:23.947 01:48:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=49f871ee-0747-4e18-9f61-49083cceec1d 00:18:23.947 01:48:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.947 01:48:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.947 01:48:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:23.947 01:48:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.947 01:48:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:23.947 01:48:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.947 01:48:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.947 01:48:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.947 01:48:23 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:23.947 01:48:23 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:23.947 01:48:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:23.947 01:48:23 -- paths/export.sh@5 -- # export PATH 00:18:23.947 01:48:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:18:23.947 01:48:23 -- nvmf/common.sh@47 -- # : 0 00:18:23.947 01:48:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.947 01:48:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.947 01:48:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.947 01:48:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.947 01:48:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.947 01:48:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.947 01:48:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.947 01:48:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
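The declare -A lines above show how the extra_key test tracks each application it manages: one associative array per attribute, keyed by app name ('target' here). A condensed sketch of that bookkeeping with the values taken from this run; the final echo is only illustrative.

declare -A app_pid=( ['target']='' )
declare -A app_socket=( ['target']='/var/tmp/spdk_tgt.sock' )
declare -A app_params=( ['target']='-m 0x1 -s 1024' )
declare -A configs_path=( ['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json' )

app=target
echo "launching: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"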
00:18:23.947 INFO: launching applications... 00:18:23.947 01:48:23 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:23.947 01:48:23 -- json_config/common.sh@9 -- # local app=target 00:18:23.947 01:48:23 -- json_config/common.sh@10 -- # shift 00:18:23.947 01:48:23 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:23.947 01:48:23 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:23.947 01:48:23 -- json_config/common.sh@15 -- # local app_extra_params= 00:18:23.947 01:48:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:23.947 01:48:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:23.947 01:48:23 -- json_config/common.sh@22 -- # app_pid["$app"]=111862 00:18:23.947 01:48:23 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:23.947 Waiting for target to run... 00:18:23.947 01:48:23 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:23.947 01:48:23 -- json_config/common.sh@25 -- # waitforlisten 111862 /var/tmp/spdk_tgt.sock 00:18:23.947 01:48:23 -- common/autotest_common.sh@817 -- # '[' -z 111862 ']' 00:18:23.947 01:48:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:23.947 01:48:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.947 01:48:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:23.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:23.947 01:48:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.947 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:18:23.948 [2024-04-24 01:48:23.949665] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:18:23.948 [2024-04-24 01:48:23.950245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111862 ] 00:18:24.513 [2024-04-24 01:48:24.432759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.771 [2024-04-24 01:48:24.673224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.704 01:48:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.704 01:48:25 -- common/autotest_common.sh@850 -- # return 0 00:18:25.704 01:48:25 -- json_config/common.sh@26 -- # echo '' 00:18:25.704 00:18:25.704 01:48:25 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:18:25.704 INFO: shutting down applications... 
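The launch logged above starts spdk_tgt with the extra_key.json configuration and then blocks until the RPC socket answers. Below is a simplified stand-in for the waitforlisten helper the test uses; the 100 x 0.1 s polling budget is an assumption for illustration, not the helper's exact behaviour.

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

"$spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
tgt_pid=$!

# Poll the RPC socket instead of sleeping a fixed time.
for (( i = 0; i < 100; i++ )); do
    if "$rpc" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        break                            # target is up and answering RPC
    fi
    sleep 0.1
done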
00:18:25.704 01:48:25 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:18:25.704 01:48:25 -- json_config/common.sh@31 -- # local app=target 00:18:25.704 01:48:25 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:18:25.704 01:48:25 -- json_config/common.sh@35 -- # [[ -n 111862 ]] 00:18:25.704 01:48:25 -- json_config/common.sh@38 -- # kill -SIGINT 111862 00:18:25.704 01:48:25 -- json_config/common.sh@40 -- # (( i = 0 )) 00:18:25.704 01:48:25 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:25.704 01:48:25 -- json_config/common.sh@41 -- # kill -0 111862 00:18:25.704 01:48:25 -- json_config/common.sh@45 -- # sleep 0.5 00:18:25.964 01:48:25 -- json_config/common.sh@40 -- # (( i++ )) 00:18:25.964 01:48:25 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:25.964 01:48:25 -- json_config/common.sh@41 -- # kill -0 111862 00:18:25.964 01:48:25 -- json_config/common.sh@45 -- # sleep 0.5 00:18:26.530 01:48:26 -- json_config/common.sh@40 -- # (( i++ )) 00:18:26.530 01:48:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:26.530 01:48:26 -- json_config/common.sh@41 -- # kill -0 111862 00:18:26.530 01:48:26 -- json_config/common.sh@45 -- # sleep 0.5 00:18:27.097 01:48:26 -- json_config/common.sh@40 -- # (( i++ )) 00:18:27.097 01:48:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:27.097 01:48:26 -- json_config/common.sh@41 -- # kill -0 111862 00:18:27.097 01:48:26 -- json_config/common.sh@45 -- # sleep 0.5 00:18:27.664 01:48:27 -- json_config/common.sh@40 -- # (( i++ )) 00:18:27.664 01:48:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:27.664 01:48:27 -- json_config/common.sh@41 -- # kill -0 111862 00:18:27.664 01:48:27 -- json_config/common.sh@45 -- # sleep 0.5 00:18:28.230 01:48:28 -- json_config/common.sh@40 -- # (( i++ )) 00:18:28.230 01:48:28 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:28.230 01:48:28 -- json_config/common.sh@41 -- # kill -0 111862 00:18:28.230 01:48:28 -- json_config/common.sh@45 -- # sleep 0.5 00:18:28.489 01:48:28 -- json_config/common.sh@40 -- # (( i++ )) 00:18:28.489 01:48:28 -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:28.489 01:48:28 -- json_config/common.sh@41 -- # kill -0 111862 00:18:28.489 SPDK target shutdown done 00:18:28.489 Success 00:18:28.489 01:48:28 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:18:28.489 01:48:28 -- json_config/common.sh@43 -- # break 00:18:28.489 01:48:28 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:18:28.489 01:48:28 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:18:28.489 01:48:28 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:18:28.489 ************************************ 00:18:28.489 END TEST json_config_extra_key 00:18:28.489 ************************************ 00:18:28.489 00:18:28.489 real 0m4.742s 00:18:28.489 user 0m4.630s 00:18:28.489 sys 0m0.545s 00:18:28.489 01:48:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.489 01:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:28.489 01:48:28 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:28.489 01:48:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:28.489 01:48:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.748 01:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:28.748 ************************************ 00:18:28.748 START TEST alias_rpc 00:18:28.748 ************************************ 00:18:28.748 01:48:28 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:28.748 * Looking for test storage... 00:18:28.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:18:28.748 01:48:28 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:28.748 01:48:28 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=111979 00:18:28.748 01:48:28 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 111979 00:18:28.748 01:48:28 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:28.748 01:48:28 -- common/autotest_common.sh@817 -- # '[' -z 111979 ']' 00:18:28.748 01:48:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.748 01:48:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:28.748 01:48:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.748 01:48:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:28.748 01:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:28.748 [2024-04-24 01:48:28.811541] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:18:28.748 [2024-04-24 01:48:28.811999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111979 ] 00:18:29.007 [2024-04-24 01:48:28.994578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.265 [2024-04-24 01:48:29.273370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.202 01:48:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:30.202 01:48:30 -- common/autotest_common.sh@850 -- # return 0 00:18:30.202 01:48:30 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:18:30.459 01:48:30 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 111979 00:18:30.459 01:48:30 -- common/autotest_common.sh@936 -- # '[' -z 111979 ']' 00:18:30.459 01:48:30 -- common/autotest_common.sh@940 -- # kill -0 111979 00:18:30.459 01:48:30 -- common/autotest_common.sh@941 -- # uname 00:18:30.459 01:48:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.459 01:48:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111979 00:18:30.459 01:48:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:30.459 killing process with pid 111979 00:18:30.459 01:48:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:30.459 01:48:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111979' 00:18:30.459 01:48:30 -- common/autotest_common.sh@955 -- # kill 111979 00:18:30.459 01:48:30 -- common/autotest_common.sh@960 -- # wait 111979 00:18:33.022 ************************************ 00:18:33.022 END TEST alias_rpc 00:18:33.022 ************************************ 00:18:33.022 00:18:33.022 real 0m4.468s 00:18:33.022 user 0m4.695s 00:18:33.022 sys 0m0.563s 00:18:33.022 01:48:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:33.022 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.281 01:48:33 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:18:33.281 01:48:33 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:33.281 01:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:33.281 01:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.281 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.281 ************************************ 00:18:33.281 START TEST spdkcli_tcp 00:18:33.281 ************************************ 00:18:33.281 01:48:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:33.281 * Looking for test storage... 00:18:33.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:33.281 01:48:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:33.281 01:48:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:18:33.281 01:48:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:33.281 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=112107 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:33.281 01:48:33 -- spdkcli/tcp.sh@27 -- # waitforlisten 112107 00:18:33.281 01:48:33 -- common/autotest_common.sh@817 -- # '[' -z 112107 ']' 00:18:33.281 01:48:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.281 01:48:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.281 01:48:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.281 01:48:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.281 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.539 [2024-04-24 01:48:33.373556] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:33.540 [2024-04-24 01:48:33.374103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112107 ] 00:18:33.540 [2024-04-24 01:48:33.547605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:33.797 [2024-04-24 01:48:33.833094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.797 [2024-04-24 01:48:33.833166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.173 01:48:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:35.173 01:48:34 -- common/autotest_common.sh@850 -- # return 0 00:18:35.173 01:48:34 -- spdkcli/tcp.sh@31 -- # socat_pid=112129 00:18:35.173 01:48:34 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:18:35.173 01:48:34 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:18:35.173 [ 00:18:35.173 "spdk_get_version", 00:18:35.173 "rpc_get_methods", 00:18:35.173 "keyring_get_keys", 00:18:35.173 "trace_get_info", 00:18:35.173 "trace_get_tpoint_group_mask", 00:18:35.173 "trace_disable_tpoint_group", 00:18:35.173 "trace_enable_tpoint_group", 00:18:35.173 "trace_clear_tpoint_mask", 00:18:35.173 "trace_set_tpoint_mask", 00:18:35.173 "framework_get_pci_devices", 00:18:35.173 "framework_get_config", 00:18:35.173 "framework_get_subsystems", 00:18:35.173 "iobuf_get_stats", 00:18:35.173 "iobuf_set_options", 00:18:35.173 "sock_set_default_impl", 00:18:35.173 "sock_impl_set_options", 00:18:35.173 "sock_impl_get_options", 00:18:35.173 "vmd_rescan", 00:18:35.173 "vmd_remove_device", 00:18:35.173 "vmd_enable", 00:18:35.173 "accel_get_stats", 00:18:35.173 "accel_set_options", 00:18:35.173 "accel_set_driver", 00:18:35.173 "accel_crypto_key_destroy", 00:18:35.173 "accel_crypto_keys_get", 00:18:35.173 "accel_crypto_key_create", 00:18:35.173 "accel_assign_opc", 00:18:35.173 "accel_get_module_info", 00:18:35.173 "accel_get_opc_assignments", 00:18:35.173 "notify_get_notifications", 00:18:35.173 "notify_get_types", 00:18:35.173 "bdev_get_histogram", 00:18:35.173 "bdev_enable_histogram", 00:18:35.174 "bdev_set_qos_limit", 00:18:35.174 "bdev_set_qd_sampling_period", 00:18:35.174 "bdev_get_bdevs", 00:18:35.174 "bdev_reset_iostat", 00:18:35.174 "bdev_get_iostat", 00:18:35.174 "bdev_examine", 00:18:35.174 "bdev_wait_for_examine", 00:18:35.174 "bdev_set_options", 00:18:35.174 "scsi_get_devices", 00:18:35.174 "thread_set_cpumask", 00:18:35.174 "framework_get_scheduler", 00:18:35.174 "framework_set_scheduler", 00:18:35.174 "framework_get_reactors", 00:18:35.174 "thread_get_io_channels", 00:18:35.174 "thread_get_pollers", 00:18:35.174 "thread_get_stats", 00:18:35.174 "framework_monitor_context_switch", 00:18:35.174 "spdk_kill_instance", 00:18:35.174 "log_enable_timestamps", 00:18:35.174 "log_get_flags", 00:18:35.174 "log_clear_flag", 00:18:35.174 "log_set_flag", 00:18:35.174 "log_get_level", 00:18:35.174 "log_set_level", 00:18:35.174 "log_get_print_level", 00:18:35.174 "log_set_print_level", 00:18:35.174 "framework_enable_cpumask_locks", 00:18:35.174 "framework_disable_cpumask_locks", 00:18:35.174 "framework_wait_init", 00:18:35.174 "framework_start_init", 00:18:35.174 "virtio_blk_create_transport", 00:18:35.174 "virtio_blk_get_transports", 00:18:35.174 "vhost_controller_set_coalescing", 00:18:35.174 "vhost_get_controllers", 00:18:35.174 
"vhost_delete_controller", 00:18:35.174 "vhost_create_blk_controller", 00:18:35.174 "vhost_scsi_controller_remove_target", 00:18:35.174 "vhost_scsi_controller_add_target", 00:18:35.174 "vhost_start_scsi_controller", 00:18:35.174 "vhost_create_scsi_controller", 00:18:35.174 "nbd_get_disks", 00:18:35.174 "nbd_stop_disk", 00:18:35.174 "nbd_start_disk", 00:18:35.174 "env_dpdk_get_mem_stats", 00:18:35.174 "nvmf_subsystem_get_listeners", 00:18:35.174 "nvmf_subsystem_get_qpairs", 00:18:35.174 "nvmf_subsystem_get_controllers", 00:18:35.174 "nvmf_get_stats", 00:18:35.174 "nvmf_get_transports", 00:18:35.174 "nvmf_create_transport", 00:18:35.174 "nvmf_get_targets", 00:18:35.174 "nvmf_delete_target", 00:18:35.174 "nvmf_create_target", 00:18:35.174 "nvmf_subsystem_allow_any_host", 00:18:35.174 "nvmf_subsystem_remove_host", 00:18:35.174 "nvmf_subsystem_add_host", 00:18:35.174 "nvmf_ns_remove_host", 00:18:35.174 "nvmf_ns_add_host", 00:18:35.174 "nvmf_subsystem_remove_ns", 00:18:35.174 "nvmf_subsystem_add_ns", 00:18:35.174 "nvmf_subsystem_listener_set_ana_state", 00:18:35.174 "nvmf_discovery_get_referrals", 00:18:35.174 "nvmf_discovery_remove_referral", 00:18:35.174 "nvmf_discovery_add_referral", 00:18:35.174 "nvmf_subsystem_remove_listener", 00:18:35.174 "nvmf_subsystem_add_listener", 00:18:35.174 "nvmf_delete_subsystem", 00:18:35.174 "nvmf_create_subsystem", 00:18:35.174 "nvmf_get_subsystems", 00:18:35.174 "nvmf_set_crdt", 00:18:35.174 "nvmf_set_config", 00:18:35.174 "nvmf_set_max_subsystems", 00:18:35.174 "iscsi_get_histogram", 00:18:35.174 "iscsi_enable_histogram", 00:18:35.174 "iscsi_set_options", 00:18:35.174 "iscsi_get_auth_groups", 00:18:35.174 "iscsi_auth_group_remove_secret", 00:18:35.174 "iscsi_auth_group_add_secret", 00:18:35.174 "iscsi_delete_auth_group", 00:18:35.174 "iscsi_create_auth_group", 00:18:35.174 "iscsi_set_discovery_auth", 00:18:35.174 "iscsi_get_options", 00:18:35.174 "iscsi_target_node_request_logout", 00:18:35.174 "iscsi_target_node_set_redirect", 00:18:35.174 "iscsi_target_node_set_auth", 00:18:35.174 "iscsi_target_node_add_lun", 00:18:35.174 "iscsi_get_stats", 00:18:35.174 "iscsi_get_connections", 00:18:35.174 "iscsi_portal_group_set_auth", 00:18:35.174 "iscsi_start_portal_group", 00:18:35.174 "iscsi_delete_portal_group", 00:18:35.174 "iscsi_create_portal_group", 00:18:35.174 "iscsi_get_portal_groups", 00:18:35.174 "iscsi_delete_target_node", 00:18:35.174 "iscsi_target_node_remove_pg_ig_maps", 00:18:35.174 "iscsi_target_node_add_pg_ig_maps", 00:18:35.174 "iscsi_create_target_node", 00:18:35.174 "iscsi_get_target_nodes", 00:18:35.174 "iscsi_delete_initiator_group", 00:18:35.174 "iscsi_initiator_group_remove_initiators", 00:18:35.174 "iscsi_initiator_group_add_initiators", 00:18:35.174 "iscsi_create_initiator_group", 00:18:35.174 "iscsi_get_initiator_groups", 00:18:35.174 "keyring_linux_set_options", 00:18:35.174 "keyring_file_remove_key", 00:18:35.174 "keyring_file_add_key", 00:18:35.174 "iaa_scan_accel_module", 00:18:35.174 "dsa_scan_accel_module", 00:18:35.174 "ioat_scan_accel_module", 00:18:35.174 "accel_error_inject_error", 00:18:35.174 "bdev_iscsi_delete", 00:18:35.174 "bdev_iscsi_create", 00:18:35.174 "bdev_iscsi_set_options", 00:18:35.174 "bdev_virtio_attach_controller", 00:18:35.174 "bdev_virtio_scsi_get_devices", 00:18:35.174 "bdev_virtio_detach_controller", 00:18:35.174 "bdev_virtio_blk_set_hotplug", 00:18:35.174 "bdev_ftl_set_property", 00:18:35.174 "bdev_ftl_get_properties", 00:18:35.174 "bdev_ftl_get_stats", 00:18:35.174 "bdev_ftl_unmap", 00:18:35.174 
"bdev_ftl_unload", 00:18:35.174 "bdev_ftl_delete", 00:18:35.174 "bdev_ftl_load", 00:18:35.174 "bdev_ftl_create", 00:18:35.174 "bdev_aio_delete", 00:18:35.174 "bdev_aio_rescan", 00:18:35.174 "bdev_aio_create", 00:18:35.174 "blobfs_create", 00:18:35.174 "blobfs_detect", 00:18:35.174 "blobfs_set_cache_size", 00:18:35.174 "bdev_zone_block_delete", 00:18:35.174 "bdev_zone_block_create", 00:18:35.174 "bdev_delay_delete", 00:18:35.174 "bdev_delay_create", 00:18:35.174 "bdev_delay_update_latency", 00:18:35.174 "bdev_split_delete", 00:18:35.174 "bdev_split_create", 00:18:35.174 "bdev_error_inject_error", 00:18:35.174 "bdev_error_delete", 00:18:35.174 "bdev_error_create", 00:18:35.174 "bdev_raid_set_options", 00:18:35.174 "bdev_raid_remove_base_bdev", 00:18:35.174 "bdev_raid_add_base_bdev", 00:18:35.174 "bdev_raid_delete", 00:18:35.174 "bdev_raid_create", 00:18:35.174 "bdev_raid_get_bdevs", 00:18:35.174 "bdev_lvol_grow_lvstore", 00:18:35.174 "bdev_lvol_get_lvols", 00:18:35.174 "bdev_lvol_get_lvstores", 00:18:35.174 "bdev_lvol_delete", 00:18:35.174 "bdev_lvol_set_read_only", 00:18:35.174 "bdev_lvol_resize", 00:18:35.174 "bdev_lvol_decouple_parent", 00:18:35.174 "bdev_lvol_inflate", 00:18:35.174 "bdev_lvol_rename", 00:18:35.174 "bdev_lvol_clone_bdev", 00:18:35.174 "bdev_lvol_clone", 00:18:35.174 "bdev_lvol_snapshot", 00:18:35.174 "bdev_lvol_create", 00:18:35.174 "bdev_lvol_delete_lvstore", 00:18:35.174 "bdev_lvol_rename_lvstore", 00:18:35.174 "bdev_lvol_create_lvstore", 00:18:35.174 "bdev_passthru_delete", 00:18:35.174 "bdev_passthru_create", 00:18:35.174 "bdev_nvme_cuse_unregister", 00:18:35.174 "bdev_nvme_cuse_register", 00:18:35.174 "bdev_opal_new_user", 00:18:35.174 "bdev_opal_set_lock_state", 00:18:35.174 "bdev_opal_delete", 00:18:35.174 "bdev_opal_get_info", 00:18:35.174 "bdev_opal_create", 00:18:35.174 "bdev_nvme_opal_revert", 00:18:35.174 "bdev_nvme_opal_init", 00:18:35.174 "bdev_nvme_send_cmd", 00:18:35.174 "bdev_nvme_get_path_iostat", 00:18:35.174 "bdev_nvme_get_mdns_discovery_info", 00:18:35.174 "bdev_nvme_stop_mdns_discovery", 00:18:35.174 "bdev_nvme_start_mdns_discovery", 00:18:35.174 "bdev_nvme_set_multipath_policy", 00:18:35.174 "bdev_nvme_set_preferred_path", 00:18:35.174 "bdev_nvme_get_io_paths", 00:18:35.174 "bdev_nvme_remove_error_injection", 00:18:35.174 "bdev_nvme_add_error_injection", 00:18:35.174 "bdev_nvme_get_discovery_info", 00:18:35.174 "bdev_nvme_stop_discovery", 00:18:35.174 "bdev_nvme_start_discovery", 00:18:35.174 "bdev_nvme_get_controller_health_info", 00:18:35.174 "bdev_nvme_disable_controller", 00:18:35.174 "bdev_nvme_enable_controller", 00:18:35.174 "bdev_nvme_reset_controller", 00:18:35.174 "bdev_nvme_get_transport_statistics", 00:18:35.174 "bdev_nvme_apply_firmware", 00:18:35.174 "bdev_nvme_detach_controller", 00:18:35.174 "bdev_nvme_get_controllers", 00:18:35.174 "bdev_nvme_attach_controller", 00:18:35.174 "bdev_nvme_set_hotplug", 00:18:35.174 "bdev_nvme_set_options", 00:18:35.174 "bdev_null_resize", 00:18:35.174 "bdev_null_delete", 00:18:35.174 "bdev_null_create", 00:18:35.174 "bdev_malloc_delete", 00:18:35.174 "bdev_malloc_create" 00:18:35.174 ] 00:18:35.174 01:48:35 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:18:35.174 01:48:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:35.174 01:48:35 -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 01:48:35 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:35.174 01:48:35 -- spdkcli/tcp.sh@38 -- # killprocess 112107 00:18:35.174 01:48:35 -- common/autotest_common.sh@936 -- # '[' 
-z 112107 ']' 00:18:35.174 01:48:35 -- common/autotest_common.sh@940 -- # kill -0 112107 00:18:35.174 01:48:35 -- common/autotest_common.sh@941 -- # uname 00:18:35.174 01:48:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.175 01:48:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112107 00:18:35.175 01:48:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.175 killing process with pid 112107 00:18:35.175 01:48:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.175 01:48:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112107' 00:18:35.175 01:48:35 -- common/autotest_common.sh@955 -- # kill 112107 00:18:35.175 01:48:35 -- common/autotest_common.sh@960 -- # wait 112107 00:18:37.708 ************************************ 00:18:37.708 END TEST spdkcli_tcp 00:18:37.708 ************************************ 00:18:37.708 00:18:37.708 real 0m4.552s 00:18:37.708 user 0m8.207s 00:18:37.708 sys 0m0.574s 00:18:37.708 01:48:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:37.708 01:48:37 -- common/autotest_common.sh@10 -- # set +x 00:18:37.708 01:48:37 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:37.708 01:48:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:37.708 01:48:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.708 01:48:37 -- common/autotest_common.sh@10 -- # set +x 00:18:37.967 ************************************ 00:18:37.967 START TEST dpdk_mem_utility 00:18:37.967 ************************************ 00:18:37.967 01:48:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:37.967 * Looking for test storage... 00:18:37.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:18:37.967 01:48:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:37.967 01:48:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=112241 00:18:37.967 01:48:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.967 01:48:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 112241 00:18:37.967 01:48:37 -- common/autotest_common.sh@817 -- # '[' -z 112241 ']' 00:18:37.967 01:48:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.967 01:48:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.967 01:48:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.967 01:48:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.967 01:48:37 -- common/autotest_common.sh@10 -- # set +x 00:18:37.967 [2024-04-24 01:48:38.046436] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:37.967 [2024-04-24 01:48:38.046694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112241 ] 00:18:38.225 [2024-04-24 01:48:38.235880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.485 [2024-04-24 01:48:38.517173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.421 01:48:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.421 01:48:39 -- common/autotest_common.sh@850 -- # return 0 00:18:39.421 01:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:18:39.421 01:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:18:39.421 01:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.421 01:48:39 -- common/autotest_common.sh@10 -- # set +x 00:18:39.421 { 00:18:39.421 "filename": "/tmp/spdk_mem_dump.txt" 00:18:39.421 } 00:18:39.421 01:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.421 01:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:39.421 DPDK memory size 820.000000 MiB in 1 heap(s) 00:18:39.421 1 heaps totaling size 820.000000 MiB 00:18:39.421 size: 820.000000 MiB heap id: 0 00:18:39.421 end heaps---------- 00:18:39.421 8 mempools totaling size 598.116089 MiB 00:18:39.421 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:18:39.421 size: 158.602051 MiB name: PDU_data_out_Pool 00:18:39.421 size: 84.521057 MiB name: bdev_io_112241 00:18:39.421 size: 51.011292 MiB name: evtpool_112241 00:18:39.421 size: 50.003479 MiB name: msgpool_112241 00:18:39.421 size: 21.763794 MiB name: PDU_Pool 00:18:39.421 size: 19.513306 MiB name: SCSI_TASK_Pool 00:18:39.421 size: 0.026123 MiB name: Session_Pool 00:18:39.421 end mempools------- 00:18:39.421 6 memzones totaling size 4.142822 MiB 00:18:39.421 size: 1.000366 MiB name: RG_ring_0_112241 00:18:39.421 size: 1.000366 MiB name: RG_ring_1_112241 00:18:39.421 size: 1.000366 MiB name: RG_ring_4_112241 00:18:39.421 size: 1.000366 MiB name: RG_ring_5_112241 00:18:39.421 size: 0.125366 MiB name: RG_ring_2_112241 00:18:39.421 size: 0.015991 MiB name: RG_ring_3_112241 00:18:39.421 end memzones------- 00:18:39.421 01:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:18:39.681 heap id: 0 total size: 820.000000 MiB number of busy elements: 225 number of free elements: 18 00:18:39.681 list of free elements. 
size: 18.468018 MiB 00:18:39.681 element at address: 0x200000400000 with size: 1.999451 MiB 00:18:39.681 element at address: 0x200000800000 with size: 1.996887 MiB 00:18:39.681 element at address: 0x200007000000 with size: 1.995972 MiB 00:18:39.681 element at address: 0x20000b200000 with size: 1.995972 MiB 00:18:39.681 element at address: 0x200019100040 with size: 0.999939 MiB 00:18:39.681 element at address: 0x200019500040 with size: 0.999939 MiB 00:18:39.681 element at address: 0x200019600000 with size: 0.999329 MiB 00:18:39.681 element at address: 0x200003e00000 with size: 0.996094 MiB 00:18:39.681 element at address: 0x200032200000 with size: 0.994324 MiB 00:18:39.681 element at address: 0x200018e00000 with size: 0.959656 MiB 00:18:39.681 element at address: 0x200019900040 with size: 0.937256 MiB 00:18:39.681 element at address: 0x200000200000 with size: 0.834106 MiB 00:18:39.681 element at address: 0x20001b000000 with size: 0.562439 MiB 00:18:39.681 element at address: 0x200019200000 with size: 0.489197 MiB 00:18:39.681 element at address: 0x200019a00000 with size: 0.485413 MiB 00:18:39.681 element at address: 0x200013800000 with size: 0.468140 MiB 00:18:39.681 element at address: 0x200028400000 with size: 0.399963 MiB 00:18:39.681 element at address: 0x200003a00000 with size: 0.353943 MiB 00:18:39.681 list of standard malloc elements. size: 199.267578 MiB 00:18:39.681 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:18:39.681 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:18:39.681 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:18:39.681 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:18:39.681 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:18:39.681 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:18:39.681 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:18:39.681 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:18:39.681 element at address: 0x200003aff180 with size: 0.002197 MiB 00:18:39.681 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:18:39.681 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:18:39.681 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:18:39.681 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6980 with size: 0.000244 MiB 
00:18:39.681 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200003aff080 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200003affa80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200003eff000 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013877d80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013877e80 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013877f80 with size: 0.000244 MiB 00:18:39.681 element at 
address: 0x200013878080 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013878180 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013878280 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013878380 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013878480 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200013878580 with size: 0.000244 MiB 00:18:39.681 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:18:39.681 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:18:39.681 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:18:39.682 element at address: 0x200019abc680 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0950c0 with size: 0.000244 MiB 
00:18:39.682 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:18:39.682 element at address: 0x200028466640 with size: 0.000244 MiB 00:18:39.682 element at address: 0x200028466740 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846d400 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846d680 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846d780 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846d880 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846d980 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846da80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846db80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846de80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846df80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e080 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e180 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e280 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e380 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e480 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e580 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e680 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e780 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e880 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846e980 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f080 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f180 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f280 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f380 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f480 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f580 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f680 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f780 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f880 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846f980 with size: 0.000244 MiB 00:18:39.682 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:18:39.683 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:18:39.683 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:18:39.683 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:18:39.683 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:18:39.683 list of memzone associated elements. 
size: 602.264404 MiB 00:18:39.683 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:18:39.683 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:18:39.683 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:18:39.683 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:18:39.683 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:18:39.683 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_112241_0 00:18:39.683 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:18:39.683 associated memzone info: size: 48.002930 MiB name: MP_evtpool_112241_0 00:18:39.683 element at address: 0x200003fff340 with size: 48.003113 MiB 00:18:39.683 associated memzone info: size: 48.002930 MiB name: MP_msgpool_112241_0 00:18:39.683 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:18:39.683 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:18:39.683 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:18:39.683 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:18:39.683 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:18:39.683 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_112241 00:18:39.683 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:18:39.683 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_112241 00:18:39.683 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:18:39.683 associated memzone info: size: 1.007996 MiB name: MP_evtpool_112241 00:18:39.683 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:18:39.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:18:39.683 element at address: 0x200019abc780 with size: 1.008179 MiB 00:18:39.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:18:39.683 element at address: 0x200018efde00 with size: 1.008179 MiB 00:18:39.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:18:39.683 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:18:39.683 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:18:39.683 element at address: 0x200003eff100 with size: 1.000549 MiB 00:18:39.683 associated memzone info: size: 1.000366 MiB name: RG_ring_0_112241 00:18:39.683 element at address: 0x200003affb80 with size: 1.000549 MiB 00:18:39.683 associated memzone info: size: 1.000366 MiB name: RG_ring_1_112241 00:18:39.683 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:18:39.683 associated memzone info: size: 1.000366 MiB name: RG_ring_4_112241 00:18:39.683 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:18:39.683 associated memzone info: size: 1.000366 MiB name: RG_ring_5_112241 00:18:39.683 element at address: 0x200003a5a9c0 with size: 0.500549 MiB 00:18:39.683 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_112241 00:18:39.683 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:18:39.683 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:18:39.683 element at address: 0x200013878680 with size: 0.500549 MiB 00:18:39.683 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:18:39.683 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:18:39.683 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:18:39.683 element at address: 0x200003adee40 with size: 0.125549 MiB 00:18:39.683 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_112241 00:18:39.683 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:18:39.683 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:18:39.683 element at address: 0x200028466840 with size: 0.023804 MiB 00:18:39.683 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:18:39.683 element at address: 0x200003adac00 with size: 0.016174 MiB 00:18:39.683 associated memzone info: size: 0.015991 MiB name: RG_ring_3_112241 00:18:39.683 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:18:39.683 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:18:39.683 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:18:39.683 associated memzone info: size: 0.000183 MiB name: MP_msgpool_112241 00:18:39.683 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:18:39.683 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_112241 00:18:39.683 element at address: 0x20002846d500 with size: 0.000366 MiB 00:18:39.683 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:18:39.683 01:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:18:39.683 01:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 112241 00:18:39.683 01:48:39 -- common/autotest_common.sh@936 -- # '[' -z 112241 ']' 00:18:39.683 01:48:39 -- common/autotest_common.sh@940 -- # kill -0 112241 00:18:39.683 01:48:39 -- common/autotest_common.sh@941 -- # uname 00:18:39.683 01:48:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.683 01:48:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112241 00:18:39.683 01:48:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:39.683 killing process with pid 112241 00:18:39.683 01:48:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:39.683 01:48:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112241' 00:18:39.683 01:48:39 -- common/autotest_common.sh@955 -- # kill 112241 00:18:39.683 01:48:39 -- common/autotest_common.sh@960 -- # wait 112241 00:18:42.219 ************************************ 00:18:42.219 END TEST dpdk_mem_utility 00:18:42.219 ************************************ 00:18:42.219 00:18:42.219 real 0m4.308s 00:18:42.219 user 0m4.324s 00:18:42.219 sys 0m0.585s 00:18:42.219 01:48:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:42.219 01:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.219 01:48:42 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:42.219 01:48:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:42.219 01:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.219 01:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.219 ************************************ 00:18:42.219 START TEST event 00:18:42.219 ************************************ 00:18:42.219 01:48:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:42.478 * Looking for test storage... 
00:18:42.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:42.478 01:48:42 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:42.478 01:48:42 -- bdev/nbd_common.sh@6 -- # set -e 00:18:42.478 01:48:42 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:42.478 01:48:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:42.478 01:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.478 01:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.478 ************************************ 00:18:42.478 START TEST event_perf 00:18:42.478 ************************************ 00:18:42.478 01:48:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:42.478 Running I/O for 1 seconds...[2024-04-24 01:48:42.460630] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:18:42.478 [2024-04-24 01:48:42.460807] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112371 ] 00:18:42.737 [2024-04-24 01:48:42.656273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.995 [2024-04-24 01:48:42.899748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.995 [2024-04-24 01:48:42.899808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.995 [2024-04-24 01:48:42.900322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.995 [2024-04-24 01:48:42.900330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.369 Running I/O for 1 seconds... 00:18:44.369 lcore 0: 180934 00:18:44.369 lcore 1: 180933 00:18:44.369 lcore 2: 180934 00:18:44.369 lcore 3: 180934 00:18:44.369 done. 00:18:44.369 00:18:44.369 real 0m1.936s 00:18:44.369 user 0m4.703s 00:18:44.369 sys 0m0.128s 00:18:44.369 01:48:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:44.369 01:48:44 -- common/autotest_common.sh@10 -- # set +x 00:18:44.369 ************************************ 00:18:44.369 END TEST event_perf 00:18:44.369 ************************************ 00:18:44.369 01:48:44 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:44.369 01:48:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:44.369 01:48:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.369 01:48:44 -- common/autotest_common.sh@10 -- # set +x 00:18:44.369 ************************************ 00:18:44.369 START TEST event_reactor 00:18:44.369 ************************************ 00:18:44.370 01:48:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:44.628 [2024-04-24 01:48:44.487366] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:44.628 [2024-04-24 01:48:44.487507] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112428 ] 00:18:44.628 [2024-04-24 01:48:44.648079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.887 [2024-04-24 01:48:44.849217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.263 test_start 00:18:46.263 oneshot 00:18:46.263 tick 100 00:18:46.263 tick 100 00:18:46.263 tick 250 00:18:46.263 tick 100 00:18:46.263 tick 100 00:18:46.263 tick 100 00:18:46.263 tick 250 00:18:46.263 tick 500 00:18:46.263 tick 100 00:18:46.263 tick 100 00:18:46.263 tick 250 00:18:46.263 tick 100 00:18:46.263 tick 100 00:18:46.263 test_end 00:18:46.263 00:18:46.263 real 0m1.833s 00:18:46.263 user 0m1.621s 00:18:46.263 sys 0m0.112s 00:18:46.263 01:48:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.263 ************************************ 00:18:46.263 END TEST event_reactor 00:18:46.263 01:48:46 -- common/autotest_common.sh@10 -- # set +x 00:18:46.263 ************************************ 00:18:46.263 01:48:46 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:46.263 01:48:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:46.263 01:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.263 01:48:46 -- common/autotest_common.sh@10 -- # set +x 00:18:46.522 ************************************ 00:18:46.522 START TEST event_reactor_perf 00:18:46.522 ************************************ 00:18:46.522 01:48:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:46.522 [2024-04-24 01:48:46.432683] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:46.522 [2024-04-24 01:48:46.433223] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112482 ] 00:18:46.781 [2024-04-24 01:48:46.619130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.038 [2024-04-24 01:48:46.902583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.407 test_start 00:18:48.407 test_end 00:18:48.407 Performance: 376363 events per second 00:18:48.407 00:18:48.407 real 0m1.997s 00:18:48.407 user 0m1.776s 00:18:48.407 sys 0m0.120s 00:18:48.407 01:48:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.407 01:48:48 -- common/autotest_common.sh@10 -- # set +x 00:18:48.407 ************************************ 00:18:48.407 END TEST event_reactor_perf 00:18:48.407 ************************************ 00:18:48.407 01:48:48 -- event/event.sh@49 -- # uname -s 00:18:48.407 01:48:48 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:18:48.407 01:48:48 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:48.407 01:48:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:48.407 01:48:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.407 01:48:48 -- common/autotest_common.sh@10 -- # set +x 00:18:48.407 ************************************ 00:18:48.407 START TEST event_scheduler 00:18:48.407 ************************************ 00:18:48.407 01:48:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:48.664 * Looking for test storage... 00:18:48.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:18:48.664 01:48:48 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:48.664 01:48:48 -- scheduler/scheduler.sh@35 -- # scheduler_pid=112567 00:18:48.664 01:48:48 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:48.664 01:48:48 -- scheduler/scheduler.sh@37 -- # waitforlisten 112567 00:18:48.664 01:48:48 -- common/autotest_common.sh@817 -- # '[' -z 112567 ']' 00:18:48.664 01:48:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.664 01:48:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:48.664 01:48:48 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:48.664 01:48:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.664 01:48:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:48.664 01:48:48 -- common/autotest_common.sh@10 -- # set +x 00:18:48.664 [2024-04-24 01:48:48.673933] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:48.664 [2024-04-24 01:48:48.674136] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112567 ] 00:18:48.921 [2024-04-24 01:48:48.881092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.222 [2024-04-24 01:48:49.175938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.222 [2024-04-24 01:48:49.176013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.222 [2024-04-24 01:48:49.176202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.222 [2024-04-24 01:48:49.176463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.804 01:48:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:49.804 01:48:49 -- common/autotest_common.sh@850 -- # return 0 00:18:49.804 01:48:49 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:49.804 01:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.804 01:48:49 -- common/autotest_common.sh@10 -- # set +x 00:18:49.804 POWER: Env isn't set yet! 00:18:49.804 POWER: Attempting to initialise ACPI cpufreq power management... 00:18:49.804 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:49.804 POWER: Cannot set governor of lcore 0 to userspace 00:18:49.804 POWER: Attempting to initialise PSTAT power management... 00:18:49.804 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:49.804 POWER: Cannot set governor of lcore 0 to performance 00:18:49.804 POWER: Attempting to initialise AMD PSTATE power management... 00:18:49.804 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:49.804 POWER: Cannot set governor of lcore 0 to userspace 00:18:49.805 POWER: Attempting to initialise CPPC power management... 00:18:49.805 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:49.805 POWER: Cannot set governor of lcore 0 to userspace 00:18:49.805 POWER: Attempting to initialise VM power management... 00:18:49.805 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:18:49.805 POWER: Unable to set Power Management Environment for lcore 0 00:18:49.805 [2024-04-24 01:48:49.676146] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:18:49.805 [2024-04-24 01:48:49.676220] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:18:49.805 [2024-04-24 01:48:49.676266] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:18:49.805 01:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.805 01:48:49 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:49.805 01:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.805 01:48:49 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 [2024-04-24 01:48:50.048734] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:18:50.062 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:50.062 01:48:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:50.062 01:48:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 ************************************ 00:18:50.062 START TEST scheduler_create_thread 00:18:50.062 ************************************ 00:18:50.062 01:48:50 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:50.062 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 2 00:18:50.062 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:50.062 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 3 00:18:50.062 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:50.062 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 4 00:18:50.062 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:50.062 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 5 00:18:50.062 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:50.062 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.062 6 00:18:50.062 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.062 01:48:50 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:50.062 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.062 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.320 7 00:18:50.320 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:50.320 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.320 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.320 8 00:18:50.320 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:50.320 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.320 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.320 9 00:18:50.320 
01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:50.320 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.320 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.320 10 00:18:50.320 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:18:50.320 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.320 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.320 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:50.320 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.320 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:50.320 01:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.320 01:48:50 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:50.320 01:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.320 01:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:51.341 01:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.341 01:48:51 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:51.341 01:48:51 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:51.341 01:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.341 01:48:51 -- common/autotest_common.sh@10 -- # set +x 00:18:52.275 01:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.275 00:18:52.275 real 0m2.167s 00:18:52.275 user 0m0.013s 00:18:52.275 sys 0m0.006s 00:18:52.275 01:48:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.275 01:48:52 -- common/autotest_common.sh@10 -- # set +x 00:18:52.275 ************************************ 00:18:52.275 END TEST scheduler_create_thread 00:18:52.275 ************************************ 00:18:52.275 01:48:52 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:52.275 01:48:52 -- scheduler/scheduler.sh@46 -- # killprocess 112567 00:18:52.275 01:48:52 -- common/autotest_common.sh@936 -- # '[' -z 112567 ']' 00:18:52.275 01:48:52 -- common/autotest_common.sh@940 -- # kill -0 112567 00:18:52.275 01:48:52 -- common/autotest_common.sh@941 -- # uname 00:18:52.275 01:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.275 01:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112567 00:18:52.275 01:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:52.275 01:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:52.275 01:48:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112567' 00:18:52.275 killing process with pid 112567 00:18:52.275 01:48:52 -- common/autotest_common.sh@955 -- # kill 112567 00:18:52.275 01:48:52 -- common/autotest_common.sh@960 -- # wait 112567 00:18:52.841 [2024-04-24 01:48:52.746000] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
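The xtrace lines above show the scheduler app (pid 112567) being torn down by the repo's killprocess helper: check that a pid was supplied, probe it with kill -0, resolve the process name with ps, then signal it and wait for it to exit. A minimal sketch of that pattern, reconstructed from the traced commands rather than the actual autotest_common.sh source (the sudo special-case and exact error handling are assumptions):

# Sketch only: mirrors the traced killprocess flow; not the upstream helper.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0  # process already gone
    local process_name=""
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"                    # assumed branch for sudo-wrapped targets
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true         # reap it if it is a child of this shell
}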
00:18:54.219 00:18:54.219 real 0m5.639s 00:18:54.219 user 0m9.737s 00:18:54.219 sys 0m0.512s 00:18:54.219 01:48:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.219 01:48:54 -- common/autotest_common.sh@10 -- # set +x 00:18:54.219 ************************************ 00:18:54.219 END TEST event_scheduler 00:18:54.219 ************************************ 00:18:54.219 01:48:54 -- event/event.sh@51 -- # modprobe -n nbd 00:18:54.219 01:48:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:54.219 01:48:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:54.219 01:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.219 01:48:54 -- common/autotest_common.sh@10 -- # set +x 00:18:54.219 ************************************ 00:18:54.219 START TEST app_repeat 00:18:54.219 ************************************ 00:18:54.219 01:48:54 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:18:54.219 01:48:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:54.219 01:48:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:54.219 01:48:54 -- event/event.sh@13 -- # local nbd_list 00:18:54.219 01:48:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:54.219 01:48:54 -- event/event.sh@14 -- # local bdev_list 00:18:54.219 01:48:54 -- event/event.sh@15 -- # local repeat_times=4 00:18:54.219 01:48:54 -- event/event.sh@17 -- # modprobe nbd 00:18:54.219 01:48:54 -- event/event.sh@19 -- # repeat_pid=112698 00:18:54.219 01:48:54 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:54.219 01:48:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:54.219 Process app_repeat pid: 112698 00:18:54.219 01:48:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112698' 00:18:54.219 01:48:54 -- event/event.sh@23 -- # for i in {0..2} 00:18:54.219 spdk_app_start Round 0 00:18:54.219 01:48:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:54.219 01:48:54 -- event/event.sh@25 -- # waitforlisten 112698 /var/tmp/spdk-nbd.sock 00:18:54.219 01:48:54 -- common/autotest_common.sh@817 -- # '[' -z 112698 ']' 00:18:54.219 01:48:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:54.219 01:48:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:54.219 01:48:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:54.219 01:48:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.219 01:48:54 -- common/autotest_common.sh@10 -- # set +x 00:18:54.478 [2024-04-24 01:48:54.310603] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:18:54.478 [2024-04-24 01:48:54.310914] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112698 ] 00:18:54.478 [2024-04-24 01:48:54.521295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:54.738 [2024-04-24 01:48:54.746482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.738 [2024-04-24 01:48:54.746498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.305 01:48:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.305 01:48:55 -- common/autotest_common.sh@850 -- # return 0 00:18:55.305 01:48:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:55.563 Malloc0 00:18:55.564 01:48:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:55.823 Malloc1 00:18:55.823 01:48:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@12 -- # local i 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.823 01:48:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:56.081 /dev/nbd0 00:18:56.081 01:48:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:56.081 01:48:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:56.081 01:48:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:18:56.081 01:48:56 -- common/autotest_common.sh@855 -- # local i 00:18:56.081 01:48:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:18:56.081 01:48:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:18:56.081 01:48:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:18:56.081 01:48:56 -- common/autotest_common.sh@859 -- # break 00:18:56.081 01:48:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:18:56.081 01:48:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:18:56.081 01:48:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:56.081 1+0 records in 00:18:56.081 1+0 records out 00:18:56.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236681 s, 17.3 MB/s 00:18:56.081 01:48:56 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:56.081 01:48:56 -- common/autotest_common.sh@872 -- # size=4096 00:18:56.081 01:48:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:56.081 01:48:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:18:56.081 01:48:56 -- common/autotest_common.sh@875 -- # return 0 00:18:56.081 01:48:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.081 01:48:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.081 01:48:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:56.339 /dev/nbd1 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:56.339 01:48:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:18:56.339 01:48:56 -- common/autotest_common.sh@855 -- # local i 00:18:56.339 01:48:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:18:56.339 01:48:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:18:56.339 01:48:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:18:56.339 01:48:56 -- common/autotest_common.sh@859 -- # break 00:18:56.339 01:48:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:18:56.339 01:48:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:18:56.339 01:48:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:56.339 1+0 records in 00:18:56.339 1+0 records out 00:18:56.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346971 s, 11.8 MB/s 00:18:56.339 01:48:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:56.339 01:48:56 -- common/autotest_common.sh@872 -- # size=4096 00:18:56.339 01:48:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:56.339 01:48:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:18:56.339 01:48:56 -- common/autotest_common.sh@875 -- # return 0 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:56.339 01:48:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:56.597 01:48:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:56.597 { 00:18:56.597 "nbd_device": "/dev/nbd0", 00:18:56.597 "bdev_name": "Malloc0" 00:18:56.597 }, 00:18:56.597 { 00:18:56.597 "nbd_device": "/dev/nbd1", 00:18:56.597 "bdev_name": "Malloc1" 00:18:56.597 } 00:18:56.597 ]' 00:18:56.597 01:48:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:56.597 { 00:18:56.597 "nbd_device": "/dev/nbd0", 00:18:56.597 "bdev_name": "Malloc0" 00:18:56.597 }, 00:18:56.597 { 00:18:56.597 "nbd_device": "/dev/nbd1", 00:18:56.597 "bdev_name": "Malloc1" 00:18:56.597 } 00:18:56.597 ]' 00:18:56.597 01:48:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:56.855 /dev/nbd1' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:56.855 /dev/nbd1' 00:18:56.855 01:48:56 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@65 -- # count=2 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@95 -- # count=2 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:56.855 256+0 records in 00:18:56.855 256+0 records out 00:18:56.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011816 s, 88.7 MB/s 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:56.855 256+0 records in 00:18:56.855 256+0 records out 00:18:56.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276901 s, 37.9 MB/s 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:56.855 256+0 records in 00:18:56.855 256+0 records out 00:18:56.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286754 s, 36.6 MB/s 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@51 -- # local i 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.855 01:48:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@41 -- # break 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@45 -- # return 0 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:57.113 01:48:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:57.371 01:48:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:57.371 01:48:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:57.371 01:48:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:57.371 01:48:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:57.371 01:48:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:57.372 01:48:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:57.372 01:48:57 -- bdev/nbd_common.sh@41 -- # break 00:18:57.372 01:48:57 -- bdev/nbd_common.sh@45 -- # return 0 00:18:57.372 01:48:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:57.372 01:48:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.372 01:48:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:57.630 01:48:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:57.630 01:48:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:57.630 01:48:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@65 -- # true 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@65 -- # count=0 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@104 -- # count=0 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:57.889 01:48:57 -- bdev/nbd_common.sh@109 -- # return 0 00:18:57.889 01:48:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:58.148 01:48:58 -- event/event.sh@35 -- # sleep 3 00:19:00.052 [2024-04-24 01:48:59.638656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:00.052 [2024-04-24 01:48:59.842951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.052 [2024-04-24 01:48:59.842957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.052 [2024-04-24 01:49:00.054227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:00.052 [2024-04-24 01:49:00.054377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
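The Round 0 pass above is one complete create/export/write/verify/teardown cycle before the harness restarts the app for Round 1. The following is a minimal sketch of that cycle assembled only from the commands visible in the trace (RPC socket, bdev size/block size, and test-file path are the ones used by this run; the waitfornbd/waitforlisten polling and the surrounding app_repeat loop are omitted):

    # sketch reconstructed from the xtrace above; assumes the app is already listening on $SOCK
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    TESTFILE=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    $RPC -s $SOCK bdev_malloc_create 64 4096            # prints Malloc0
    $RPC -s $SOCK bdev_malloc_create 64 4096            # prints Malloc1
    $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
    $RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=$TESTFILE bs=4096 count=256   # 1 MiB reference pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$TESTFILE of=$nbd bs=4096 count=256 oflag=direct   # write through NBD
        cmp -b -n 1M $TESTFILE $nbd                              # read back and compare
    done
    rm $TESTFILE
    $RPC -s $SOCK nbd_stop_disk /dev/nbd0
    $RPC -s $SOCK nbd_stop_disk /dev/nbd1
    $RPC -s $SOCK spdk_kill_instance SIGTERM            # end of round; event.sh sleeps 3s, then restarts

The same sequence repeats in Rounds 1 and 2 below; only the timestamps and dd throughput figures differ.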
00:19:01.434 01:49:01 -- event/event.sh@23 -- # for i in {0..2} 00:19:01.434 spdk_app_start Round 1 00:19:01.434 01:49:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:19:01.434 01:49:01 -- event/event.sh@25 -- # waitforlisten 112698 /var/tmp/spdk-nbd.sock 00:19:01.434 01:49:01 -- common/autotest_common.sh@817 -- # '[' -z 112698 ']' 00:19:01.434 01:49:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:01.434 01:49:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:01.434 01:49:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:01.434 01:49:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.434 01:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:01.434 01:49:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.434 01:49:01 -- common/autotest_common.sh@850 -- # return 0 00:19:01.434 01:49:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:02.001 Malloc0 00:19:02.001 01:49:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:02.261 Malloc1 00:19:02.261 01:49:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@12 -- # local i 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.261 01:49:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:02.521 /dev/nbd0 00:19:02.521 01:49:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:02.521 01:49:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:02.521 01:49:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:19:02.521 01:49:02 -- common/autotest_common.sh@855 -- # local i 00:19:02.521 01:49:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:02.521 01:49:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:02.521 01:49:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:19:02.521 01:49:02 -- common/autotest_common.sh@859 -- # break 00:19:02.521 01:49:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:02.521 01:49:02 -- common/autotest_common.sh@870 -- # (( 
i <= 20 )) 00:19:02.521 01:49:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:02.521 1+0 records in 00:19:02.521 1+0 records out 00:19:02.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369348 s, 11.1 MB/s 00:19:02.521 01:49:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:02.521 01:49:02 -- common/autotest_common.sh@872 -- # size=4096 00:19:02.521 01:49:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:02.521 01:49:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:02.521 01:49:02 -- common/autotest_common.sh@875 -- # return 0 00:19:02.521 01:49:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.521 01:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.521 01:49:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:02.781 /dev/nbd1 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:02.781 01:49:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:19:02.781 01:49:02 -- common/autotest_common.sh@855 -- # local i 00:19:02.781 01:49:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:02.781 01:49:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:02.781 01:49:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:19:02.781 01:49:02 -- common/autotest_common.sh@859 -- # break 00:19:02.781 01:49:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:02.781 01:49:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:02.781 01:49:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:02.781 1+0 records in 00:19:02.781 1+0 records out 00:19:02.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286407 s, 14.3 MB/s 00:19:02.781 01:49:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:02.781 01:49:02 -- common/autotest_common.sh@872 -- # size=4096 00:19:02.781 01:49:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:02.781 01:49:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:02.781 01:49:02 -- common/autotest_common.sh@875 -- # return 0 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.781 01:49:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:03.040 01:49:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:03.040 { 00:19:03.040 "nbd_device": "/dev/nbd0", 00:19:03.040 "bdev_name": "Malloc0" 00:19:03.040 }, 00:19:03.040 { 00:19:03.040 "nbd_device": "/dev/nbd1", 00:19:03.040 "bdev_name": "Malloc1" 00:19:03.040 } 00:19:03.040 ]' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:03.040 { 00:19:03.040 "nbd_device": "/dev/nbd0", 00:19:03.040 "bdev_name": "Malloc0" 00:19:03.040 }, 00:19:03.040 { 00:19:03.040 "nbd_device": "/dev/nbd1", 00:19:03.040 "bdev_name": "Malloc1" 00:19:03.040 } 
00:19:03.040 ]' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:03.040 /dev/nbd1' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:03.040 /dev/nbd1' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@65 -- # count=2 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@95 -- # count=2 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:03.040 256+0 records in 00:19:03.040 256+0 records out 00:19:03.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00898697 s, 117 MB/s 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:03.040 256+0 records in 00:19:03.040 256+0 records out 00:19:03.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296969 s, 35.3 MB/s 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:03.040 01:49:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:03.298 256+0 records in 00:19:03.298 256+0 records out 00:19:03.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280363 s, 37.4 MB/s 00:19:03.298 01:49:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:03.298 01:49:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.298 01:49:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:03.298 01:49:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:03.298 01:49:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:19:03.299 01:49:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@51 -- # local i 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.299 01:49:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@41 -- # break 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.558 01:49:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@41 -- # break 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.816 01:49:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@65 -- # true 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@65 -- # count=0 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@104 -- # count=0 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:04.075 01:49:03 -- bdev/nbd_common.sh@109 -- # return 0 00:19:04.075 01:49:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:04.643 01:49:04 -- event/event.sh@35 -- # sleep 3 00:19:06.081 [2024-04-24 01:49:05.843082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:06.081 [2024-04-24 01:49:06.051036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.081 [2024-04-24 01:49:06.051037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.339 [2024-04-24 01:49:06.261002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:19:06.339 [2024-04-24 01:49:06.261300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:07.713 spdk_app_start Round 2 00:19:07.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:07.713 01:49:07 -- event/event.sh@23 -- # for i in {0..2} 00:19:07.713 01:49:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:19:07.713 01:49:07 -- event/event.sh@25 -- # waitforlisten 112698 /var/tmp/spdk-nbd.sock 00:19:07.713 01:49:07 -- common/autotest_common.sh@817 -- # '[' -z 112698 ']' 00:19:07.713 01:49:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:07.713 01:49:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.713 01:49:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:07.713 01:49:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.713 01:49:07 -- common/autotest_common.sh@10 -- # set +x 00:19:07.713 01:49:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.713 01:49:07 -- common/autotest_common.sh@850 -- # return 0 00:19:07.713 01:49:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:07.973 Malloc0 00:19:07.973 01:49:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:08.231 Malloc1 00:19:08.488 01:49:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@12 -- # local i 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:08.488 /dev/nbd0 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.488 01:49:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:19:08.488 01:49:08 -- common/autotest_common.sh@855 -- # local i 00:19:08.488 01:49:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:08.488 01:49:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:08.488 01:49:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:19:08.488 01:49:08 -- 
common/autotest_common.sh@859 -- # break 00:19:08.488 01:49:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:08.488 01:49:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:08.488 01:49:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:08.488 1+0 records in 00:19:08.488 1+0 records out 00:19:08.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338519 s, 12.1 MB/s 00:19:08.488 01:49:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.488 01:49:08 -- common/autotest_common.sh@872 -- # size=4096 00:19:08.488 01:49:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.488 01:49:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:08.488 01:49:08 -- common/autotest_common.sh@875 -- # return 0 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.488 01:49:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:08.745 /dev/nbd1 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:08.745 01:49:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:19:08.745 01:49:08 -- common/autotest_common.sh@855 -- # local i 00:19:08.745 01:49:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:08.745 01:49:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:08.745 01:49:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:19:08.745 01:49:08 -- common/autotest_common.sh@859 -- # break 00:19:08.745 01:49:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:08.745 01:49:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:08.745 01:49:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:08.745 1+0 records in 00:19:08.745 1+0 records out 00:19:08.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341517 s, 12.0 MB/s 00:19:08.745 01:49:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.745 01:49:08 -- common/autotest_common.sh@872 -- # size=4096 00:19:08.745 01:49:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.745 01:49:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:08.745 01:49:08 -- common/autotest_common.sh@875 -- # return 0 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.745 01:49:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.002 01:49:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:09.002 { 00:19:09.002 "nbd_device": "/dev/nbd0", 00:19:09.002 "bdev_name": "Malloc0" 00:19:09.002 }, 00:19:09.002 { 00:19:09.002 "nbd_device": "/dev/nbd1", 00:19:09.002 "bdev_name": "Malloc1" 00:19:09.002 } 00:19:09.002 ]' 00:19:09.002 01:49:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:09.002 { 00:19:09.002 "nbd_device": 
"/dev/nbd0", 00:19:09.002 "bdev_name": "Malloc0" 00:19:09.002 }, 00:19:09.002 { 00:19:09.002 "nbd_device": "/dev/nbd1", 00:19:09.002 "bdev_name": "Malloc1" 00:19:09.002 } 00:19:09.002 ]' 00:19:09.002 01:49:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.002 01:49:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:09.002 /dev/nbd1' 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:09.261 /dev/nbd1' 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@65 -- # count=2 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@95 -- # count=2 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:09.261 256+0 records in 00:19:09.261 256+0 records out 00:19:09.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100985 s, 104 MB/s 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:09.261 256+0 records in 00:19:09.261 256+0 records out 00:19:09.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244924 s, 42.8 MB/s 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:09.261 256+0 records in 00:19:09.261 256+0 records out 00:19:09.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296157 s, 35.4 MB/s 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 
00:19:09.261 01:49:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@51 -- # local i 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.261 01:49:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@41 -- # break 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.520 01:49:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@41 -- # break 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.779 01:49:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.039 01:49:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.039 01:49:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.039 01:49:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@65 -- # true 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@104 -- # count=0 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:10.039 01:49:10 -- bdev/nbd_common.sh@109 -- # return 0 00:19:10.040 01:49:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:10.606 01:49:10 -- event/event.sh@35 -- # sleep 3 00:19:12.012 [2024-04-24 01:49:11.853318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:12.012 [2024-04-24 01:49:12.055851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.012 [2024-04-24 01:49:12.055852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.271 [2024-04-24 01:49:12.258459] notify.c: 45:spdk_notify_type_register: 
*NOTICE*: Notification type 'bdev_register' already registered. 00:19:12.271 [2024-04-24 01:49:12.258810] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:13.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:13.649 01:49:13 -- event/event.sh@38 -- # waitforlisten 112698 /var/tmp/spdk-nbd.sock 00:19:13.649 01:49:13 -- common/autotest_common.sh@817 -- # '[' -z 112698 ']' 00:19:13.649 01:49:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:13.649 01:49:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:13.649 01:49:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:13.649 01:49:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:13.649 01:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:13.649 01:49:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:13.649 01:49:13 -- common/autotest_common.sh@850 -- # return 0 00:19:13.649 01:49:13 -- event/event.sh@39 -- # killprocess 112698 00:19:13.649 01:49:13 -- common/autotest_common.sh@936 -- # '[' -z 112698 ']' 00:19:13.649 01:49:13 -- common/autotest_common.sh@940 -- # kill -0 112698 00:19:13.649 01:49:13 -- common/autotest_common.sh@941 -- # uname 00:19:13.649 01:49:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.649 01:49:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112698 00:19:13.649 01:49:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:13.650 01:49:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:13.650 01:49:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112698' 00:19:13.650 killing process with pid 112698 00:19:13.650 01:49:13 -- common/autotest_common.sh@955 -- # kill 112698 00:19:13.650 01:49:13 -- common/autotest_common.sh@960 -- # wait 112698 00:19:15.029 spdk_app_start is called in Round 0. 00:19:15.029 Shutdown signal received, stop current app iteration 00:19:15.029 Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 reinitialization... 00:19:15.029 spdk_app_start is called in Round 1. 00:19:15.029 Shutdown signal received, stop current app iteration 00:19:15.029 Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 reinitialization... 00:19:15.029 spdk_app_start is called in Round 2. 00:19:15.029 Shutdown signal received, stop current app iteration 00:19:15.029 Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 reinitialization... 00:19:15.029 spdk_app_start is called in Round 3. 
00:19:15.029 Shutdown signal received, stop current app iteration 00:19:15.029 ************************************ 00:19:15.029 END TEST app_repeat 00:19:15.029 ************************************ 00:19:15.029 01:49:14 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:19:15.029 01:49:14 -- event/event.sh@42 -- # return 0 00:19:15.029 00:19:15.029 real 0m20.755s 00:19:15.029 user 0m43.684s 00:19:15.029 sys 0m3.340s 00:19:15.029 01:49:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:15.029 01:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:15.029 01:49:15 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:19:15.029 01:49:15 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:15.029 01:49:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:15.029 01:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.029 01:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:15.029 ************************************ 00:19:15.029 START TEST cpu_locks 00:19:15.029 ************************************ 00:19:15.029 01:49:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:15.288 * Looking for test storage... 00:19:15.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:19:15.288 01:49:15 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:19:15.288 01:49:15 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:19:15.288 01:49:15 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:19:15.288 01:49:15 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:19:15.288 01:49:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:15.288 01:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.288 01:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:15.288 ************************************ 00:19:15.288 START TEST default_locks 00:19:15.288 ************************************ 00:19:15.288 01:49:15 -- common/autotest_common.sh@1111 -- # default_locks 00:19:15.288 01:49:15 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113243 00:19:15.288 01:49:15 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:15.288 01:49:15 -- event/cpu_locks.sh@47 -- # waitforlisten 113243 00:19:15.288 01:49:15 -- common/autotest_common.sh@817 -- # '[' -z 113243 ']' 00:19:15.288 01:49:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.288 01:49:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:15.288 01:49:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.288 01:49:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:15.288 01:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:15.288 [2024-04-24 01:49:15.316493] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:19:15.288 [2024-04-24 01:49:15.316829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113243 ] 00:19:15.547 [2024-04-24 01:49:15.478367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.805 [2024-04-24 01:49:15.692541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.741 01:49:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.741 01:49:16 -- common/autotest_common.sh@850 -- # return 0 00:19:16.741 01:49:16 -- event/cpu_locks.sh@49 -- # locks_exist 113243 00:19:16.741 01:49:16 -- event/cpu_locks.sh@22 -- # lslocks -p 113243 00:19:16.741 01:49:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:17.000 01:49:16 -- event/cpu_locks.sh@50 -- # killprocess 113243 00:19:17.000 01:49:16 -- common/autotest_common.sh@936 -- # '[' -z 113243 ']' 00:19:17.000 01:49:16 -- common/autotest_common.sh@940 -- # kill -0 113243 00:19:17.000 01:49:16 -- common/autotest_common.sh@941 -- # uname 00:19:17.000 01:49:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.000 01:49:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113243 00:19:17.000 killing process with pid 113243 00:19:17.000 01:49:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:17.000 01:49:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:17.000 01:49:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113243' 00:19:17.000 01:49:16 -- common/autotest_common.sh@955 -- # kill 113243 00:19:17.000 01:49:16 -- common/autotest_common.sh@960 -- # wait 113243 00:19:19.535 01:49:19 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113243 00:19:19.535 01:49:19 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.535 01:49:19 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113243 00:19:19.535 01:49:19 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:19:19.535 01:49:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.535 01:49:19 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:19:19.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.535 ERROR: process (pid: 113243) is no longer running 00:19:19.535 01:49:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.535 01:49:19 -- common/autotest_common.sh@641 -- # waitforlisten 113243 00:19:19.535 01:49:19 -- common/autotest_common.sh@817 -- # '[' -z 113243 ']' 00:19:19.535 01:49:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.535 01:49:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.535 01:49:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:19.535 01:49:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.535 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:19:19.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113243) - No such process 00:19:19.535 01:49:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:19.535 01:49:19 -- common/autotest_common.sh@850 -- # return 1 00:19:19.535 01:49:19 -- common/autotest_common.sh@641 -- # es=1 00:19:19.535 01:49:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.535 01:49:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.535 01:49:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.535 ************************************ 00:19:19.535 END TEST default_locks 00:19:19.535 ************************************ 00:19:19.535 01:49:19 -- event/cpu_locks.sh@54 -- # no_locks 00:19:19.535 01:49:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:19.535 01:49:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:19:19.535 01:49:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:19.535 00:19:19.535 real 0m4.129s 00:19:19.535 user 0m4.249s 00:19:19.535 sys 0m0.637s 00:19:19.535 01:49:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.535 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:19:19.535 01:49:19 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:19:19.535 01:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:19.535 01:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.535 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:19:19.535 ************************************ 00:19:19.535 START TEST default_locks_via_rpc 00:19:19.535 ************************************ 00:19:19.535 01:49:19 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:19:19.535 01:49:19 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=113334 00:19:19.535 01:49:19 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:19.535 01:49:19 -- event/cpu_locks.sh@63 -- # waitforlisten 113334 00:19:19.535 01:49:19 -- common/autotest_common.sh@817 -- # '[' -z 113334 ']' 00:19:19.535 01:49:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.535 01:49:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.535 01:49:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.535 01:49:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.535 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:19:19.535 [2024-04-24 01:49:19.546642] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
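The default_locks pass that just finished reduces to: start spdk_tgt pinned to core 0, confirm the kernel-visible CPU core lock is held, kill the target, and confirm that a second waitforlisten on the dead pid fails. A compressed sketch of what the trace shows, with the waitforlisten helper reduced to a comment (pid handling is simplified; only commands that appear in the xtrace are used):

    # sketch of the default_locks flow seen above
    TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $TGT -m 0x1 &                                  # single-core mask; core lock taken by default
    pid=$!
    # waitforlisten (autotest_common.sh) polls until /var/tmp/spdk.sock accepts RPCs
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # locks_exist: the spdk_cpu_lock file is held
    kill "$pid"; wait "$pid"                       # killprocess
    ! kill -0 "$pid" 2>/dev/null                   # process gone; NOT waitforlisten is expected to fail

The two cpu_locks.sh line 22 commands in the trace (lslocks and grep) are halves of that same pipeline, which is why they share a line number in the xtrace output. The default_locks_via_rpc test starting below repeats the pattern, but toggles the lock via the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs instead of command-line flags.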
00:19:19.535 [2024-04-24 01:49:19.547020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113334 ] 00:19:19.794 [2024-04-24 01:49:19.703355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.052 [2024-04-24 01:49:19.914005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.995 01:49:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.995 01:49:20 -- common/autotest_common.sh@850 -- # return 0 00:19:20.995 01:49:20 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:19:20.995 01:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.995 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:19:20.995 01:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.995 01:49:20 -- event/cpu_locks.sh@67 -- # no_locks 00:19:20.995 01:49:20 -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:20.995 01:49:20 -- event/cpu_locks.sh@26 -- # local lock_files 00:19:20.995 01:49:20 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:20.995 01:49:20 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:19:20.995 01:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.995 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:19:20.995 01:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.995 01:49:20 -- event/cpu_locks.sh@71 -- # locks_exist 113334 00:19:20.995 01:49:20 -- event/cpu_locks.sh@22 -- # lslocks -p 113334 00:19:20.995 01:49:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:21.253 01:49:21 -- event/cpu_locks.sh@73 -- # killprocess 113334 00:19:21.253 01:49:21 -- common/autotest_common.sh@936 -- # '[' -z 113334 ']' 00:19:21.253 01:49:21 -- common/autotest_common.sh@940 -- # kill -0 113334 00:19:21.253 01:49:21 -- common/autotest_common.sh@941 -- # uname 00:19:21.253 01:49:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:21.253 01:49:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113334 00:19:21.253 killing process with pid 113334 00:19:21.253 01:49:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:21.253 01:49:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:21.253 01:49:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113334' 00:19:21.253 01:49:21 -- common/autotest_common.sh@955 -- # kill 113334 00:19:21.253 01:49:21 -- common/autotest_common.sh@960 -- # wait 113334 00:19:23.783 00:19:23.783 real 0m4.210s 00:19:23.783 user 0m4.148s 00:19:23.783 sys 0m0.679s 00:19:23.783 ************************************ 00:19:23.783 END TEST default_locks_via_rpc 00:19:23.783 ************************************ 00:19:23.783 01:49:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:23.783 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:19:23.783 01:49:23 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:19:23.783 01:49:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:23.783 01:49:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.783 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:19:23.783 ************************************ 00:19:23.783 START TEST non_locking_app_on_locked_coremask 00:19:23.783 ************************************ 00:19:23.783 
01:49:23 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:19:23.783 01:49:23 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113422 00:19:23.783 01:49:23 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:23.783 01:49:23 -- event/cpu_locks.sh@81 -- # waitforlisten 113422 /var/tmp/spdk.sock 00:19:23.783 01:49:23 -- common/autotest_common.sh@817 -- # '[' -z 113422 ']' 00:19:23.783 01:49:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.783 01:49:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:23.783 01:49:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.783 01:49:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:23.783 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:19:24.041 [2024-04-24 01:49:23.872928] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:19:24.041 [2024-04-24 01:49:23.873376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113422 ] 00:19:24.041 [2024-04-24 01:49:24.053923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.299 [2024-04-24 01:49:24.300619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:25.233 01:49:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:25.233 01:49:25 -- common/autotest_common.sh@850 -- # return 0 00:19:25.233 01:49:25 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113450 00:19:25.233 01:49:25 -- event/cpu_locks.sh@85 -- # waitforlisten 113450 /var/tmp/spdk2.sock 00:19:25.233 01:49:25 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:19:25.233 01:49:25 -- common/autotest_common.sh@817 -- # '[' -z 113450 ']' 00:19:25.233 01:49:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:25.233 01:49:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:25.233 01:49:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:25.233 01:49:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:25.233 01:49:25 -- common/autotest_common.sh@10 -- # set +x 00:19:25.233 [2024-04-24 01:49:25.220715] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:19:25.233 [2024-04-24 01:49:25.221160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113450 ] 00:19:25.491 [2024-04-24 01:49:25.394525] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
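At this point the trace has started both targets used by non_locking_app_on_locked_coremask: the first holds the core-0 lock, and the second starts on the same mask anyway because it opts out of core locking (hence the "CPU core locks deactivated" notice above). A sketch built from the two command lines visible in the trace (sockets and core mask as used in this run; waitforlisten again reduced to comments):

    # sketch of the two-instance pattern exercised by this test
    TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $TGT -m 0x1 &                                                # first instance, takes the core-0 lock
    pid1=$!
    # waitforlisten on /var/tmp/spdk.sock
    $TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # second instance, same core, no lock
    pid2=$!
    # waitforlisten on /var/tmp/spdk2.sock
    lslocks -p "$pid1" | grep -q spdk_cpu_lock                   # only the first instance holds the lock
    kill "$pid1"; kill "$pid2"

The lock check and the two killprocess calls appear immediately below in the trace; locking_app_on_unlocked_coremask later inverts the roles, starting the unlocked instance first.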
00:19:25.491 [2024-04-24 01:49:25.394623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.750 [2024-04-24 01:49:25.828380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.280 01:49:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:28.280 01:49:27 -- common/autotest_common.sh@850 -- # return 0 00:19:28.280 01:49:27 -- event/cpu_locks.sh@87 -- # locks_exist 113422 00:19:28.280 01:49:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:28.280 01:49:27 -- event/cpu_locks.sh@22 -- # lslocks -p 113422 00:19:28.538 01:49:28 -- event/cpu_locks.sh@89 -- # killprocess 113422 00:19:28.538 01:49:28 -- common/autotest_common.sh@936 -- # '[' -z 113422 ']' 00:19:28.538 01:49:28 -- common/autotest_common.sh@940 -- # kill -0 113422 00:19:28.538 01:49:28 -- common/autotest_common.sh@941 -- # uname 00:19:28.538 01:49:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:28.539 01:49:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113422 00:19:28.539 killing process with pid 113422 00:19:28.539 01:49:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:28.539 01:49:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:28.539 01:49:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113422' 00:19:28.539 01:49:28 -- common/autotest_common.sh@955 -- # kill 113422 00:19:28.539 01:49:28 -- common/autotest_common.sh@960 -- # wait 113422 00:19:33.804 01:49:33 -- event/cpu_locks.sh@90 -- # killprocess 113450 00:19:33.804 01:49:33 -- common/autotest_common.sh@936 -- # '[' -z 113450 ']' 00:19:33.804 01:49:33 -- common/autotest_common.sh@940 -- # kill -0 113450 00:19:33.804 01:49:33 -- common/autotest_common.sh@941 -- # uname 00:19:33.804 01:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.804 01:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113450 00:19:33.804 01:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:33.804 01:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:33.804 01:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113450' 00:19:33.804 killing process with pid 113450 00:19:33.804 01:49:33 -- common/autotest_common.sh@955 -- # kill 113450 00:19:33.804 01:49:33 -- common/autotest_common.sh@960 -- # wait 113450 00:19:37.160 ************************************ 00:19:37.160 END TEST non_locking_app_on_locked_coremask 00:19:37.160 ************************************ 00:19:37.160 00:19:37.160 real 0m12.753s 00:19:37.160 user 0m13.276s 00:19:37.160 sys 0m1.450s 00:19:37.160 01:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:37.160 01:49:36 -- common/autotest_common.sh@10 -- # set +x 00:19:37.160 01:49:36 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:19:37.160 01:49:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:37.160 01:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.160 01:49:36 -- common/autotest_common.sh@10 -- # set +x 00:19:37.160 ************************************ 00:19:37.160 START TEST locking_app_on_unlocked_coremask 00:19:37.160 ************************************ 00:19:37.160 01:49:36 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:19:37.160 01:49:36 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=113623 00:19:37.160 01:49:36 -- event/cpu_locks.sh@97 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:19:37.160 01:49:36 -- event/cpu_locks.sh@99 -- # waitforlisten 113623 /var/tmp/spdk.sock 00:19:37.160 01:49:36 -- common/autotest_common.sh@817 -- # '[' -z 113623 ']' 00:19:37.160 01:49:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.160 01:49:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.160 01:49:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.160 01:49:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.160 01:49:36 -- common/autotest_common.sh@10 -- # set +x 00:19:37.160 [2024-04-24 01:49:36.708442] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:19:37.160 [2024-04-24 01:49:36.708784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113623 ] 00:19:37.160 [2024-04-24 01:49:36.871172] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:19:37.160 [2024-04-24 01:49:36.871456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.160 [2024-04-24 01:49:37.088623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:38.093 01:49:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:38.093 01:49:37 -- common/autotest_common.sh@850 -- # return 0 00:19:38.093 01:49:37 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=113649 00:19:38.093 01:49:37 -- event/cpu_locks.sh@103 -- # waitforlisten 113649 /var/tmp/spdk2.sock 00:19:38.093 01:49:37 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:38.093 01:49:37 -- common/autotest_common.sh@817 -- # '[' -z 113649 ']' 00:19:38.093 01:49:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:38.093 01:49:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.093 01:49:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:38.093 01:49:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.093 01:49:37 -- common/autotest_common.sh@10 -- # set +x 00:19:38.093 [2024-04-24 01:49:38.027386] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:19:38.093 [2024-04-24 01:49:38.027799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113649 ] 00:19:38.351 [2024-04-24 01:49:38.201382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.609 [2024-04-24 01:49:38.629333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.138 01:49:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.138 01:49:40 -- common/autotest_common.sh@850 -- # return 0 00:19:41.138 01:49:40 -- event/cpu_locks.sh@105 -- # locks_exist 113649 00:19:41.138 01:49:40 -- event/cpu_locks.sh@22 -- # lslocks -p 113649 00:19:41.138 01:49:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:41.396 01:49:41 -- event/cpu_locks.sh@107 -- # killprocess 113623 00:19:41.396 01:49:41 -- common/autotest_common.sh@936 -- # '[' -z 113623 ']' 00:19:41.396 01:49:41 -- common/autotest_common.sh@940 -- # kill -0 113623 00:19:41.396 01:49:41 -- common/autotest_common.sh@941 -- # uname 00:19:41.396 01:49:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.396 01:49:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113623 00:19:41.396 killing process with pid 113623 00:19:41.396 01:49:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:41.396 01:49:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:41.397 01:49:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113623' 00:19:41.397 01:49:41 -- common/autotest_common.sh@955 -- # kill 113623 00:19:41.397 01:49:41 -- common/autotest_common.sh@960 -- # wait 113623 00:19:46.658 01:49:46 -- event/cpu_locks.sh@108 -- # killprocess 113649 00:19:46.658 01:49:46 -- common/autotest_common.sh@936 -- # '[' -z 113649 ']' 00:19:46.658 01:49:46 -- common/autotest_common.sh@940 -- # kill -0 113649 00:19:46.658 01:49:46 -- common/autotest_common.sh@941 -- # uname 00:19:46.658 01:49:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.658 01:49:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113649 00:19:46.658 killing process with pid 113649 00:19:46.659 01:49:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.659 01:49:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.659 01:49:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113649' 00:19:46.659 01:49:46 -- common/autotest_common.sh@955 -- # kill 113649 00:19:46.659 01:49:46 -- common/autotest_common.sh@960 -- # wait 113649 00:19:49.942 ************************************ 00:19:49.942 END TEST locking_app_on_unlocked_coremask 00:19:49.942 ************************************ 00:19:49.942 00:19:49.942 real 0m12.684s 00:19:49.942 user 0m13.173s 00:19:49.942 sys 0m1.383s 00:19:49.942 01:49:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:49.942 01:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 01:49:49 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:19:49.942 01:49:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:49.942 01:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:49.942 01:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 ************************************ 00:19:49.942 START TEST locking_app_on_locked_coremask 00:19:49.942 
************************************ 00:19:49.942 01:49:49 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:19:49.942 01:49:49 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=113825 00:19:49.942 01:49:49 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:49.942 01:49:49 -- event/cpu_locks.sh@116 -- # waitforlisten 113825 /var/tmp/spdk.sock 00:19:49.942 01:49:49 -- common/autotest_common.sh@817 -- # '[' -z 113825 ']' 00:19:49.942 01:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.942 01:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:49.942 01:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.942 01:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:49.942 01:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 [2024-04-24 01:49:49.475386] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:19:49.942 [2024-04-24 01:49:49.475546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113825 ] 00:19:49.942 [2024-04-24 01:49:49.637111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.942 [2024-04-24 01:49:49.857426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.878 01:49:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:50.878 01:49:50 -- common/autotest_common.sh@850 -- # return 0 00:19:50.878 01:49:50 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=113853 00:19:50.878 01:49:50 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:50.878 01:49:50 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 113853 /var/tmp/spdk2.sock 00:19:50.878 01:49:50 -- common/autotest_common.sh@638 -- # local es=0 00:19:50.878 01:49:50 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113853 /var/tmp/spdk2.sock 00:19:50.878 01:49:50 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:19:50.878 01:49:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:50.878 01:49:50 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:19:50.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:50.878 01:49:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:50.878 01:49:50 -- common/autotest_common.sh@641 -- # waitforlisten 113853 /var/tmp/spdk2.sock 00:19:50.878 01:49:50 -- common/autotest_common.sh@817 -- # '[' -z 113853 ']' 00:19:50.878 01:49:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:50.878 01:49:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:50.878 01:49:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:50.878 01:49:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:50.878 01:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.878 [2024-04-24 01:49:50.871111] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:19:50.878 [2024-04-24 01:49:50.871410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113853 ] 00:19:51.136 [2024-04-24 01:49:51.046102] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 113825 has claimed it. 00:19:51.136 [2024-04-24 01:49:51.046207] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:51.701 ERROR: process (pid: 113853) is no longer running 00:19:51.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113853) - No such process 00:19:51.701 01:49:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:51.701 01:49:51 -- common/autotest_common.sh@850 -- # return 1 00:19:51.701 01:49:51 -- common/autotest_common.sh@641 -- # es=1 00:19:51.701 01:49:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:51.701 01:49:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:51.701 01:49:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:51.701 01:49:51 -- event/cpu_locks.sh@122 -- # locks_exist 113825 00:19:51.701 01:49:51 -- event/cpu_locks.sh@22 -- # lslocks -p 113825 00:19:51.701 01:49:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:51.959 01:49:51 -- event/cpu_locks.sh@124 -- # killprocess 113825 00:19:51.959 01:49:51 -- common/autotest_common.sh@936 -- # '[' -z 113825 ']' 00:19:51.959 01:49:51 -- common/autotest_common.sh@940 -- # kill -0 113825 00:19:51.959 01:49:51 -- common/autotest_common.sh@941 -- # uname 00:19:51.959 01:49:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.959 01:49:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113825 00:19:51.959 killing process with pid 113825 00:19:51.959 01:49:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:51.959 01:49:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:51.959 01:49:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113825' 00:19:51.959 01:49:51 -- common/autotest_common.sh@955 -- # kill 113825 00:19:51.959 01:49:51 -- common/autotest_common.sh@960 -- # wait 113825 00:19:55.250 ************************************ 00:19:55.250 END TEST locking_app_on_locked_coremask 00:19:55.250 ************************************ 00:19:55.250 00:19:55.250 real 0m5.333s 00:19:55.250 user 0m5.818s 00:19:55.250 sys 0m0.727s 00:19:55.250 01:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:55.250 01:49:54 -- common/autotest_common.sh@10 -- # set +x 00:19:55.250 01:49:54 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:19:55.250 01:49:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:55.250 01:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.250 01:49:54 -- common/autotest_common.sh@10 -- # set +x 00:19:55.250 ************************************ 00:19:55.250 START TEST locking_overlapped_coremask 00:19:55.250 ************************************ 00:19:55.250 01:49:54 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:19:55.250 01:49:54 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=113938 00:19:55.250 01:49:54 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:55.250 01:49:54 -- event/cpu_locks.sh@133 -- # waitforlisten 113938 
/var/tmp/spdk.sock 00:19:55.250 01:49:54 -- common/autotest_common.sh@817 -- # '[' -z 113938 ']' 00:19:55.250 01:49:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.250 01:49:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.250 01:49:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.250 01:49:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.250 01:49:54 -- common/autotest_common.sh@10 -- # set +x 00:19:55.250 [2024-04-24 01:49:54.941919] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:19:55.250 [2024-04-24 01:49:54.942480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113938 ] 00:19:55.250 [2024-04-24 01:49:55.153401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.508 [2024-04-24 01:49:55.413826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.508 [2024-04-24 01:49:55.413889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.508 [2024-04-24 01:49:55.413887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.441 01:49:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.441 01:49:56 -- common/autotest_common.sh@850 -- # return 0 00:19:56.441 01:49:56 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=113961 00:19:56.441 01:49:56 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 113961 /var/tmp/spdk2.sock 00:19:56.441 01:49:56 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:19:56.441 01:49:56 -- common/autotest_common.sh@638 -- # local es=0 00:19:56.441 01:49:56 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113961 /var/tmp/spdk2.sock 00:19:56.441 01:49:56 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:19:56.441 01:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:56.441 01:49:56 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:19:56.441 01:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:56.441 01:49:56 -- common/autotest_common.sh@641 -- # waitforlisten 113961 /var/tmp/spdk2.sock 00:19:56.441 01:49:56 -- common/autotest_common.sh@817 -- # '[' -z 113961 ']' 00:19:56.441 01:49:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:56.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:56.441 01:49:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:56.441 01:49:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:56.441 01:49:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:56.441 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:56.441 [2024-04-24 01:49:56.389157] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:19:56.441 [2024-04-24 01:49:56.389920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113961 ] 00:19:56.699 [2024-04-24 01:49:56.571561] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113938 has claimed it. 00:19:56.699 [2024-04-24 01:49:56.571659] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:57.263 ERROR: process (pid: 113961) is no longer running 00:19:57.263 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113961) - No such process 00:19:57.264 01:49:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:57.264 01:49:57 -- common/autotest_common.sh@850 -- # return 1 00:19:57.264 01:49:57 -- common/autotest_common.sh@641 -- # es=1 00:19:57.264 01:49:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:57.264 01:49:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:57.264 01:49:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:57.264 01:49:57 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:19:57.264 01:49:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:19:57.264 01:49:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:19:57.264 01:49:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:19:57.264 01:49:57 -- event/cpu_locks.sh@141 -- # killprocess 113938 00:19:57.264 01:49:57 -- common/autotest_common.sh@936 -- # '[' -z 113938 ']' 00:19:57.264 01:49:57 -- common/autotest_common.sh@940 -- # kill -0 113938 00:19:57.264 01:49:57 -- common/autotest_common.sh@941 -- # uname 00:19:57.264 01:49:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:57.264 01:49:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113938 00:19:57.264 01:49:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:57.264 killing process with pid 113938 00:19:57.264 01:49:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:57.264 01:49:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113938' 00:19:57.264 01:49:57 -- common/autotest_common.sh@955 -- # kill 113938 00:19:57.264 01:49:57 -- common/autotest_common.sh@960 -- # wait 113938 00:20:00.604 00:20:00.604 real 0m5.105s 00:20:00.604 user 0m13.475s 00:20:00.604 sys 0m0.655s 00:20:00.604 01:49:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:00.604 01:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 ************************************ 00:20:00.604 END TEST locking_overlapped_coremask 00:20:00.604 ************************************ 00:20:00.604 01:49:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:20:00.604 01:49:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:00.604 01:49:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:00.604 01:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 ************************************ 00:20:00.604 START TEST locking_overlapped_coremask_via_rpc 00:20:00.604 
************************************ 00:20:00.604 01:50:00 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:20:00.604 01:50:00 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=114051 00:20:00.604 01:50:00 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:20:00.604 01:50:00 -- event/cpu_locks.sh@149 -- # waitforlisten 114051 /var/tmp/spdk.sock 00:20:00.604 01:50:00 -- common/autotest_common.sh@817 -- # '[' -z 114051 ']' 00:20:00.604 01:50:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.604 01:50:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.604 01:50:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.604 01:50:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.604 01:50:00 -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 [2024-04-24 01:50:00.107392] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:00.604 [2024-04-24 01:50:00.107589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114051 ] 00:20:00.604 [2024-04-24 01:50:00.295809] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:20:00.604 [2024-04-24 01:50:00.295893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:00.604 [2024-04-24 01:50:00.517669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.604 [2024-04-24 01:50:00.517743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.604 [2024-04-24 01:50:00.517752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.569 01:50:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.569 01:50:01 -- common/autotest_common.sh@850 -- # return 0 00:20:01.569 01:50:01 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=114074 00:20:01.569 01:50:01 -- event/cpu_locks.sh@153 -- # waitforlisten 114074 /var/tmp/spdk2.sock 00:20:01.569 01:50:01 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:20:01.569 01:50:01 -- common/autotest_common.sh@817 -- # '[' -z 114074 ']' 00:20:01.569 01:50:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:01.569 01:50:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:01.569 01:50:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:01.569 01:50:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.569 01:50:01 -- common/autotest_common.sh@10 -- # set +x 00:20:01.569 [2024-04-24 01:50:01.536047] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:20:01.569 [2024-04-24 01:50:01.536630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114074 ] 00:20:01.827 [2024-04-24 01:50:01.722399] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:20:01.827 [2024-04-24 01:50:01.722476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:02.392 [2024-04-24 01:50:02.185749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.392 [2024-04-24 01:50:02.185905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.392 [2024-04-24 01:50:02.185911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:04.296 01:50:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.296 01:50:04 -- common/autotest_common.sh@850 -- # return 0 00:20:04.296 01:50:04 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:20:04.296 01:50:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.296 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 01:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.296 01:50:04 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:04.296 01:50:04 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.296 01:50:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:04.296 01:50:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:04.296 01:50:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.296 01:50:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:04.296 01:50:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.296 01:50:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:04.296 01:50:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.296 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 [2024-04-24 01:50:04.304288] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114051 has claimed it. 00:20:04.296 request: 00:20:04.296 { 00:20:04.296 "method": "framework_enable_cpumask_locks", 00:20:04.296 "req_id": 1 00:20:04.296 } 00:20:04.296 Got JSON-RPC error response 00:20:04.296 response: 00:20:04.296 { 00:20:04.296 "code": -32603, 00:20:04.296 "message": "Failed to claim CPU core: 2" 00:20:04.296 } 00:20:04.296 01:50:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:04.296 01:50:04 -- common/autotest_common.sh@641 -- # es=1 00:20:04.296 01:50:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:04.296 01:50:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:04.296 01:50:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:04.296 01:50:04 -- event/cpu_locks.sh@158 -- # waitforlisten 114051 /var/tmp/spdk.sock 00:20:04.296 01:50:04 -- common/autotest_common.sh@817 -- # '[' -z 114051 ']' 00:20:04.296 01:50:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.296 01:50:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:04.296 01:50:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.296 01:50:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.296 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 01:50:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.862 01:50:04 -- common/autotest_common.sh@850 -- # return 0 00:20:04.862 01:50:04 -- event/cpu_locks.sh@159 -- # waitforlisten 114074 /var/tmp/spdk2.sock 00:20:04.862 01:50:04 -- common/autotest_common.sh@817 -- # '[' -z 114074 ']' 00:20:04.862 01:50:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:04.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:04.862 01:50:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.862 01:50:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:04.862 01:50:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.862 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 01:50:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.862 01:50:04 -- common/autotest_common.sh@850 -- # return 0 00:20:04.862 01:50:04 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:20:04.862 01:50:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:04.862 01:50:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:04.862 01:50:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:04.862 00:20:04.862 real 0m4.858s 00:20:04.862 user 0m1.659s 00:20:04.862 sys 0m0.230s 00:20:04.862 01:50:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:04.862 ************************************ 00:20:04.862 END TEST locking_overlapped_coremask_via_rpc 00:20:04.862 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 ************************************ 00:20:04.862 01:50:04 -- event/cpu_locks.sh@174 -- # cleanup 00:20:04.862 01:50:04 -- event/cpu_locks.sh@15 -- # [[ -z 114051 ]] 00:20:04.862 01:50:04 -- event/cpu_locks.sh@15 -- # killprocess 114051 00:20:04.862 01:50:04 -- common/autotest_common.sh@936 -- # '[' -z 114051 ']' 00:20:04.862 01:50:04 -- common/autotest_common.sh@940 -- # kill -0 114051 00:20:04.862 01:50:04 -- common/autotest_common.sh@941 -- # uname 00:20:04.863 01:50:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.863 01:50:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114051 00:20:04.863 01:50:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:04.863 killing process with pid 114051 00:20:04.863 01:50:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:04.863 01:50:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114051' 00:20:04.863 01:50:04 -- common/autotest_common.sh@955 -- # kill 114051 00:20:04.863 01:50:04 -- common/autotest_common.sh@960 -- # wait 114051 00:20:08.149 01:50:07 -- event/cpu_locks.sh@16 -- # [[ -z 114074 ]] 00:20:08.149 01:50:07 -- event/cpu_locks.sh@16 -- # killprocess 114074 00:20:08.149 01:50:07 -- common/autotest_common.sh@936 -- # '[' -z 114074 ']' 
00:20:08.149 01:50:07 -- common/autotest_common.sh@940 -- # kill -0 114074 00:20:08.149 01:50:07 -- common/autotest_common.sh@941 -- # uname 00:20:08.149 01:50:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.149 01:50:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114074 00:20:08.149 killing process with pid 114074 00:20:08.149 01:50:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:08.149 01:50:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:08.149 01:50:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114074' 00:20:08.149 01:50:07 -- common/autotest_common.sh@955 -- # kill 114074 00:20:08.149 01:50:07 -- common/autotest_common.sh@960 -- # wait 114074 00:20:10.751 01:50:10 -- event/cpu_locks.sh@18 -- # rm -f 00:20:10.751 01:50:10 -- event/cpu_locks.sh@1 -- # cleanup 00:20:10.751 01:50:10 -- event/cpu_locks.sh@15 -- # [[ -z 114051 ]] 00:20:10.751 01:50:10 -- event/cpu_locks.sh@15 -- # killprocess 114051 00:20:10.751 01:50:10 -- common/autotest_common.sh@936 -- # '[' -z 114051 ']' 00:20:10.751 Process with pid 114051 is not found 00:20:10.751 01:50:10 -- common/autotest_common.sh@940 -- # kill -0 114051 00:20:10.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (114051) - No such process 00:20:10.751 01:50:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 114051 is not found' 00:20:10.751 01:50:10 -- event/cpu_locks.sh@16 -- # [[ -z 114074 ]] 00:20:10.751 01:50:10 -- event/cpu_locks.sh@16 -- # killprocess 114074 00:20:10.751 01:50:10 -- common/autotest_common.sh@936 -- # '[' -z 114074 ']' 00:20:10.751 01:50:10 -- common/autotest_common.sh@940 -- # kill -0 114074 00:20:10.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (114074) - No such process 00:20:10.751 Process with pid 114074 is not found 00:20:10.751 01:50:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 114074 is not found' 00:20:10.751 01:50:10 -- event/cpu_locks.sh@18 -- # rm -f 00:20:10.751 00:20:10.751 real 0m55.508s 00:20:10.751 user 1m35.694s 00:20:10.751 sys 0m6.984s 00:20:10.751 01:50:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:10.751 01:50:10 -- common/autotest_common.sh@10 -- # set +x 00:20:10.751 ************************************ 00:20:10.751 END TEST cpu_locks 00:20:10.751 ************************************ 00:20:10.751 00:20:10.751 real 1m28.389s 00:20:10.751 user 2m37.590s 00:20:10.751 sys 0m11.546s 00:20:10.751 01:50:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:10.751 01:50:10 -- common/autotest_common.sh@10 -- # set +x 00:20:10.751 ************************************ 00:20:10.751 END TEST event 00:20:10.751 ************************************ 00:20:10.751 01:50:10 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:10.751 01:50:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:10.751 01:50:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.751 01:50:10 -- common/autotest_common.sh@10 -- # set +x 00:20:10.751 ************************************ 00:20:10.751 START TEST thread 00:20:10.751 ************************************ 00:20:10.751 01:50:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:10.751 * Looking for test storage... 
00:20:10.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:20:10.751 01:50:10 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:10.751 01:50:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:20:10.752 01:50:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.752 01:50:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.029 ************************************ 00:20:11.029 START TEST thread_poller_perf 00:20:11.029 ************************************ 00:20:11.029 01:50:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:11.029 [2024-04-24 01:50:10.884481] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:11.029 [2024-04-24 01:50:10.884636] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114293 ] 00:20:11.029 [2024-04-24 01:50:11.050175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.312 [2024-04-24 01:50:11.266713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.312 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:20:12.712 ====================================== 00:20:12.712 busy:2110211152 (cyc) 00:20:12.712 total_run_count: 327000 00:20:12.712 tsc_hz: 2100000000 (cyc) 00:20:12.712 ====================================== 00:20:12.712 poller_cost: 6453 (cyc), 3072 (nsec) 00:20:12.712 00:20:12.712 real 0m1.853s 00:20:12.712 user 0m1.617s 00:20:12.712 sys 0m0.136s 00:20:12.712 01:50:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:12.712 01:50:12 -- common/autotest_common.sh@10 -- # set +x 00:20:12.712 ************************************ 00:20:12.712 END TEST thread_poller_perf 00:20:12.712 ************************************ 00:20:12.712 01:50:12 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:12.712 01:50:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:20:12.712 01:50:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.712 01:50:12 -- common/autotest_common.sh@10 -- # set +x 00:20:12.712 ************************************ 00:20:12.712 START TEST thread_poller_perf 00:20:12.712 ************************************ 00:20:12.712 01:50:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:12.996 [2024-04-24 01:50:12.811882] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:12.996 [2024-04-24 01:50:12.812057] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114347 ] 00:20:12.996 [2024-04-24 01:50:12.970658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.281 [2024-04-24 01:50:13.193437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.281 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:20:14.666 ====================================== 00:20:14.666 busy:2103550078 (cyc) 00:20:14.666 total_run_count: 4207000 00:20:14.666 tsc_hz: 2100000000 (cyc) 00:20:14.666 ====================================== 00:20:14.666 poller_cost: 500 (cyc), 238 (nsec) 00:20:14.666 00:20:14.666 real 0m1.850s 00:20:14.666 user 0m1.646s 00:20:14.666 sys 0m0.104s 00:20:14.666 01:50:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:14.666 01:50:14 -- common/autotest_common.sh@10 -- # set +x 00:20:14.666 ************************************ 00:20:14.666 END TEST thread_poller_perf 00:20:14.666 ************************************ 00:20:14.666 01:50:14 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:20:14.666 01:50:14 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:20:14.666 01:50:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:14.666 01:50:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:14.666 01:50:14 -- common/autotest_common.sh@10 -- # set +x 00:20:14.666 ************************************ 00:20:14.666 START TEST thread_spdk_lock 00:20:14.666 ************************************ 00:20:14.666 01:50:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:20:14.925 [2024-04-24 01:50:14.773299] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:14.925 [2024-04-24 01:50:14.773568] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114399 ] 00:20:14.925 [2024-04-24 01:50:14.950189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:15.183 [2024-04-24 01:50:15.146863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.183 [2024-04-24 01:50:15.146872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.749 [2024-04-24 01:50:15.661311] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:20:15.749 [2024-04-24 01:50:15.661442] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:20:15.749 [2024-04-24 01:50:15.661471] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55a5b6a87600 00:20:15.749 [2024-04-24 01:50:15.670852] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:20:15.749 [2024-04-24 01:50:15.670956] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:20:15.749 [2024-04-24 01:50:15.670995] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:20:16.316 Starting test contend 00:20:16.316 Worker Delay Wait us Hold us Total us 00:20:16.316 0 3 124689 191665 316355 00:20:16.316 1 5 61115 293763 354878 00:20:16.316 PASS test contend 00:20:16.316 Starting test hold_by_poller 
00:20:16.316 PASS test hold_by_poller 00:20:16.316 Starting test hold_by_message 00:20:16.316 PASS test hold_by_message 00:20:16.316 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:20:16.316 100014 assertions passed 00:20:16.316 0 assertions failed 00:20:16.316 00:20:16.316 real 0m1.451s 00:20:16.316 user 0m1.744s 00:20:16.316 sys 0m0.132s 00:20:16.316 01:50:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.316 01:50:16 -- common/autotest_common.sh@10 -- # set +x 00:20:16.316 ************************************ 00:20:16.316 END TEST thread_spdk_lock 00:20:16.316 ************************************ 00:20:16.316 00:20:16.316 real 0m5.481s 00:20:16.316 user 0m5.164s 00:20:16.316 sys 0m0.552s 00:20:16.316 01:50:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.316 01:50:16 -- common/autotest_common.sh@10 -- # set +x 00:20:16.316 ************************************ 00:20:16.316 END TEST thread 00:20:16.316 ************************************ 00:20:16.316 01:50:16 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:20:16.316 01:50:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:16.316 01:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.316 01:50:16 -- common/autotest_common.sh@10 -- # set +x 00:20:16.316 ************************************ 00:20:16.316 START TEST accel 00:20:16.316 ************************************ 00:20:16.316 01:50:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:20:16.316 * Looking for test storage... 00:20:16.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:20:16.316 01:50:16 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:20:16.316 01:50:16 -- accel/accel.sh@82 -- # get_expected_opcs 00:20:16.316 01:50:16 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:20:16.316 01:50:16 -- accel/accel.sh@62 -- # spdk_tgt_pid=114491 00:20:16.316 01:50:16 -- accel/accel.sh@63 -- # waitforlisten 114491 00:20:16.316 01:50:16 -- common/autotest_common.sh@817 -- # '[' -z 114491 ']' 00:20:16.316 01:50:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.316 01:50:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.316 01:50:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.316 01:50:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.316 01:50:16 -- common/autotest_common.sh@10 -- # set +x 00:20:16.316 01:50:16 -- accel/accel.sh@61 -- # build_accel_config 00:20:16.316 01:50:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:16.316 01:50:16 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:20:16.316 01:50:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:16.316 01:50:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:16.316 01:50:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:16.316 01:50:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:16.316 01:50:16 -- accel/accel.sh@40 -- # local IFS=, 00:20:16.316 01:50:16 -- accel/accel.sh@41 -- # jq -r . 00:20:16.575 [2024-04-24 01:50:16.434822] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:20:16.575 [2024-04-24 01:50:16.435048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114491 ] 00:20:16.575 [2024-04-24 01:50:16.652387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.833 [2024-04-24 01:50:16.888154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.768 01:50:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.768 01:50:17 -- common/autotest_common.sh@850 -- # return 0 00:20:17.768 01:50:17 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:20:17.768 01:50:17 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:20:17.768 01:50:17 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:20:17.768 01:50:17 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:20:17.768 01:50:17 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:20:17.768 01:50:17 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:20:17.768 01:50:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.768 01:50:17 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:20:17.768 01:50:17 -- common/autotest_common.sh@10 -- # set +x 00:20:17.768 01:50:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 
01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # IFS== 00:20:18.026 01:50:17 -- accel/accel.sh@72 -- # read -r opc module 00:20:18.026 01:50:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:20:18.026 01:50:17 -- accel/accel.sh@75 -- # killprocess 114491 00:20:18.026 01:50:17 -- common/autotest_common.sh@936 -- # '[' -z 114491 ']' 00:20:18.026 01:50:17 -- common/autotest_common.sh@940 -- # kill -0 114491 00:20:18.026 01:50:17 -- common/autotest_common.sh@941 -- # uname 00:20:18.026 01:50:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:18.026 01:50:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114491 00:20:18.026 01:50:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:18.026 killing process with pid 114491 00:20:18.026 01:50:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:18.026 01:50:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114491' 00:20:18.026 01:50:17 -- common/autotest_common.sh@955 -- # kill 114491 00:20:18.026 01:50:17 -- common/autotest_common.sh@960 -- # wait 114491 00:20:20.598 01:50:20 -- accel/accel.sh@76 -- # trap - ERR 00:20:20.598 01:50:20 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:20:20.598 01:50:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.598 01:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.598 01:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:20.598 01:50:20 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:20:20.598 01:50:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:20:20.598 01:50:20 -- accel/accel.sh@12 -- # build_accel_config 00:20:20.598 01:50:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:20.598 01:50:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:20.598 01:50:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:20.598 
01:50:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:20.598 01:50:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:20.598 01:50:20 -- accel/accel.sh@40 -- # local IFS=, 00:20:20.598 01:50:20 -- accel/accel.sh@41 -- # jq -r . 00:20:20.598 01:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:20.598 01:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:20.857 01:50:20 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:20:20.857 01:50:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:20.857 01:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.857 01:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:20.857 ************************************ 00:20:20.857 START TEST accel_missing_filename 00:20:20.857 ************************************ 00:20:20.857 01:50:20 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:20:20.857 01:50:20 -- common/autotest_common.sh@638 -- # local es=0 00:20:20.857 01:50:20 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:20:20.857 01:50:20 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:20:20.857 01:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:20.857 01:50:20 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:20:20.857 01:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:20.857 01:50:20 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:20:20.857 01:50:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:20:20.857 01:50:20 -- accel/accel.sh@12 -- # build_accel_config 00:20:20.857 01:50:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:20.857 01:50:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:20.857 01:50:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:20.857 01:50:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:20.857 01:50:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:20.857 01:50:20 -- accel/accel.sh@40 -- # local IFS=, 00:20:20.857 01:50:20 -- accel/accel.sh@41 -- # jq -r . 00:20:20.857 [2024-04-24 01:50:20.801860] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:20.857 [2024-04-24 01:50:20.802045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114595 ] 00:20:21.116 [2024-04-24 01:50:20.981556] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.383 [2024-04-24 01:50:21.211577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.641 [2024-04-24 01:50:21.474743] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:22.207 [2024-04-24 01:50:22.092182] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:20:22.466 A filename is required. 
00:20:22.466 01:50:22 -- common/autotest_common.sh@641 -- # es=234 00:20:22.466 01:50:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:22.466 01:50:22 -- common/autotest_common.sh@650 -- # es=106 00:20:22.466 01:50:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:20:22.466 01:50:22 -- common/autotest_common.sh@658 -- # es=1 00:20:22.466 01:50:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:22.466 00:20:22.466 real 0m1.799s 00:20:22.466 user 0m1.548s 00:20:22.466 sys 0m0.198s 00:20:22.466 01:50:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:22.466 01:50:22 -- common/autotest_common.sh@10 -- # set +x 00:20:22.466 ************************************ 00:20:22.466 END TEST accel_missing_filename 00:20:22.466 ************************************ 00:20:22.725 01:50:22 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:20:22.725 01:50:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:20:22.725 01:50:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.725 01:50:22 -- common/autotest_common.sh@10 -- # set +x 00:20:22.725 ************************************ 00:20:22.725 START TEST accel_compress_verify 00:20:22.725 ************************************ 00:20:22.725 01:50:22 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:20:22.725 01:50:22 -- common/autotest_common.sh@638 -- # local es=0 00:20:22.725 01:50:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:20:22.726 01:50:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:20:22.726 01:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:22.726 01:50:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:20:22.726 01:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:22.726 01:50:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:20:22.726 01:50:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:20:22.726 01:50:22 -- accel/accel.sh@12 -- # build_accel_config 00:20:22.726 01:50:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:22.726 01:50:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:22.726 01:50:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:22.726 01:50:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:22.726 01:50:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:22.726 01:50:22 -- accel/accel.sh@40 -- # local IFS=, 00:20:22.726 01:50:22 -- accel/accel.sh@41 -- # jq -r . 00:20:22.726 [2024-04-24 01:50:22.699304] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:20:22.726 [2024-04-24 01:50:22.699489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114650 ] 00:20:22.983 [2024-04-24 01:50:22.878122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.242 [2024-04-24 01:50:23.104227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.501 [2024-04-24 01:50:23.360007] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:24.067 [2024-04-24 01:50:23.979990] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:20:24.634 00:20:24.634 Compression does not support the verify option, aborting. 00:20:24.634 01:50:24 -- common/autotest_common.sh@641 -- # es=161 00:20:24.634 01:50:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:24.634 01:50:24 -- common/autotest_common.sh@650 -- # es=33 00:20:24.634 01:50:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:20:24.634 01:50:24 -- common/autotest_common.sh@658 -- # es=1 00:20:24.634 01:50:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:24.634 00:20:24.634 real 0m1.793s 00:20:24.634 user 0m1.559s 00:20:24.634 sys 0m0.168s 00:20:24.634 01:50:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:24.634 01:50:24 -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 ************************************ 00:20:24.634 END TEST accel_compress_verify 00:20:24.634 ************************************ 00:20:24.634 01:50:24 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:20:24.634 01:50:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:24.634 01:50:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.634 01:50:24 -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 ************************************ 00:20:24.634 START TEST accel_wrong_workload 00:20:24.634 ************************************ 00:20:24.634 01:50:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:20:24.634 01:50:24 -- common/autotest_common.sh@638 -- # local es=0 00:20:24.634 01:50:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:20:24.634 01:50:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:20:24.634 01:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:24.634 01:50:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:20:24.634 01:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:24.634 01:50:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:20:24.634 01:50:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:20:24.634 01:50:24 -- accel/accel.sh@12 -- # build_accel_config 00:20:24.634 01:50:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:24.634 01:50:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:24.634 01:50:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:24.634 01:50:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:24.634 01:50:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:24.634 01:50:24 -- accel/accel.sh@40 -- # local IFS=, 00:20:24.634 01:50:24 -- accel/accel.sh@41 -- # jq -r . 
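Each accel_perf invocation in this log is preceded by the same build_accel_config trace: accel_json_cfg stays an empty array here (none of the "0 -gt 0" checks fire and no module name is set), and the closing jq -r . simply validates the JSON document that accel_perf later reads from /dev/fd/62 via its -c option. A rough standalone sketch of that plumbing, with an assumed empty config and process substitution standing in for the harness's fd 62 redirection (the exact JSON accel.sh assembles is not shown in this log):

accel_json_cfg='{}'
jq -r . <<< "$accel_json_cfg"    # validate the config before handing it over
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(printf '%s\n' "$accel_json_cfg") -t 1 -w crc32c -y

The foobar failure that follows is unrelated to this config step; the run is rejected earlier, while parsing the -w argument.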
00:20:24.634 Unsupported workload type: foobar 00:20:24.634 [2024-04-24 01:50:24.590849] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:20:24.634 accel_perf options: 00:20:24.634 [-h help message] 00:20:24.634 [-q queue depth per core] 00:20:24.634 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:20:24.634 [-T number of threads per core 00:20:24.634 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:20:24.634 [-t time in seconds] 00:20:24.634 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:20:24.634 [ dif_verify, , dif_generate, dif_generate_copy 00:20:24.634 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:20:24.634 [-l for compress/decompress workloads, name of uncompressed input file 00:20:24.634 [-S for crc32c workload, use this seed value (default 0) 00:20:24.634 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:20:24.634 [-f for fill workload, use this BYTE value (default 255) 00:20:24.634 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:20:24.634 [-y verify result if this switch is on] 00:20:24.634 [-a tasks to allocate per core (default: same value as -q)] 00:20:24.634 Can be used to spread operations across a wider range of memory. 00:20:24.634 01:50:24 -- common/autotest_common.sh@641 -- # es=1 00:20:24.634 01:50:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:24.634 01:50:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:24.634 01:50:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:24.634 00:20:24.634 real 0m0.081s 00:20:24.634 user 0m0.074s 00:20:24.634 sys 0m0.052s 00:20:24.634 01:50:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:24.634 ************************************ 00:20:24.634 END TEST accel_wrong_workload 00:20:24.634 01:50:24 -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 ************************************ 00:20:24.634 01:50:24 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:20:24.634 01:50:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:20:24.634 01:50:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.634 01:50:24 -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 ************************************ 00:20:24.634 START TEST accel_negative_buffers 00:20:24.634 ************************************ 00:20:24.634 01:50:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:20:24.634 01:50:24 -- common/autotest_common.sh@638 -- # local es=0 00:20:24.634 01:50:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:20:24.634 01:50:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:20:24.634 01:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:24.634 01:50:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:20:24.893 01:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:24.894 01:50:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:20:24.894 01:50:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:20:24.894 01:50:24 -- accel/accel.sh@12 -- # 
build_accel_config 00:20:24.894 01:50:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:24.894 01:50:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:24.894 01:50:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:24.894 01:50:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:24.894 01:50:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:24.894 01:50:24 -- accel/accel.sh@40 -- # local IFS=, 00:20:24.894 01:50:24 -- accel/accel.sh@41 -- # jq -r . 00:20:24.894 -x option must be non-negative. 00:20:24.894 [2024-04-24 01:50:24.756790] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:20:24.894 accel_perf options: 00:20:24.894 [-h help message] 00:20:24.894 [-q queue depth per core] 00:20:24.894 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:20:24.894 [-T number of threads per core 00:20:24.894 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:20:24.894 [-t time in seconds] 00:20:24.894 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:20:24.894 [ dif_verify, , dif_generate, dif_generate_copy 00:20:24.894 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:20:24.894 [-l for compress/decompress workloads, name of uncompressed input file 00:20:24.894 [-S for crc32c workload, use this seed value (default 0) 00:20:24.894 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:20:24.894 [-f for fill workload, use this BYTE value (default 255) 00:20:24.894 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:20:24.894 [-y verify result if this switch is on] 00:20:24.894 [-a tasks to allocate per core (default: same value as -q)] 00:20:24.894 Can be used to spread operations across a wider range of memory. 
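By contrast with the "-x -1" failure above, the option list implies that a valid xor run only needs a source-buffer count of at least 2. A hand-run sketch using the same binary path from the log (duration and buffer count chosen arbitrarily; -y enables result verification as described in the options above):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2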
00:20:24.894 01:50:24 -- common/autotest_common.sh@641 -- # es=1 00:20:24.894 01:50:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:24.894 01:50:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:24.894 01:50:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:24.894 00:20:24.894 real 0m0.075s 00:20:24.894 user 0m0.080s 00:20:24.894 sys 0m0.044s 00:20:24.894 01:50:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:24.894 01:50:24 -- common/autotest_common.sh@10 -- # set +x 00:20:24.894 ************************************ 00:20:24.894 END TEST accel_negative_buffers 00:20:24.894 ************************************ 00:20:24.894 01:50:24 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:20:24.894 01:50:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:20:24.894 01:50:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.894 01:50:24 -- common/autotest_common.sh@10 -- # set +x 00:20:24.894 ************************************ 00:20:24.894 START TEST accel_crc32c 00:20:24.894 ************************************ 00:20:24.894 01:50:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:20:24.894 01:50:24 -- accel/accel.sh@16 -- # local accel_opc 00:20:24.894 01:50:24 -- accel/accel.sh@17 -- # local accel_module 00:20:24.894 01:50:24 -- accel/accel.sh@19 -- # IFS=: 00:20:24.894 01:50:24 -- accel/accel.sh@19 -- # read -r var val 00:20:24.894 01:50:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:20:24.894 01:50:24 -- accel/accel.sh@12 -- # build_accel_config 00:20:24.894 01:50:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:20:24.894 01:50:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:24.894 01:50:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:24.894 01:50:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:24.894 01:50:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:24.894 01:50:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:24.894 01:50:24 -- accel/accel.sh@40 -- # local IFS=, 00:20:24.894 01:50:24 -- accel/accel.sh@41 -- # jq -r . 00:20:24.894 [2024-04-24 01:50:24.943041] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:20:24.894 [2024-04-24 01:50:24.943307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114758 ] 00:20:25.153 [2024-04-24 01:50:25.126359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.412 [2024-04-24 01:50:25.457795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=0x1 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=crc32c 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=32 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=software 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@22 -- # accel_module=software 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=32 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=32 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=1 00:20:25.671 01:50:25 
-- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val=Yes 00:20:25.671 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.671 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.671 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.672 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.672 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.672 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:25.672 01:50:25 -- accel/accel.sh@20 -- # val= 00:20:25.672 01:50:25 -- accel/accel.sh@21 -- # case "$var" in 00:20:25.672 01:50:25 -- accel/accel.sh@19 -- # IFS=: 00:20:25.672 01:50:25 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@20 -- # val= 00:20:28.204 01:50:27 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@20 -- # val= 00:20:28.204 01:50:27 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@20 -- # val= 00:20:28.204 01:50:27 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@20 -- # val= 00:20:28.204 01:50:27 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@20 -- # val= 00:20:28.204 01:50:27 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@20 -- # val= 00:20:28.204 01:50:27 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:28.204 01:50:27 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:20:28.204 01:50:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:28.204 00:20:28.204 real 0m2.940s 00:20:28.204 user 0m2.678s 00:20:28.204 sys 0m0.185s 00:20:28.204 01:50:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:28.204 ************************************ 00:20:28.204 END TEST accel_crc32c 00:20:28.204 ************************************ 00:20:28.204 01:50:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.204 01:50:27 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:20:28.204 01:50:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:20:28.204 01:50:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:28.204 01:50:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.204 ************************************ 00:20:28.204 START TEST accel_crc32c_C2 00:20:28.204 
************************************ 00:20:28.204 01:50:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:20:28.204 01:50:27 -- accel/accel.sh@16 -- # local accel_opc 00:20:28.204 01:50:27 -- accel/accel.sh@17 -- # local accel_module 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # IFS=: 00:20:28.204 01:50:27 -- accel/accel.sh@19 -- # read -r var val 00:20:28.204 01:50:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:20:28.204 01:50:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:20:28.204 01:50:27 -- accel/accel.sh@12 -- # build_accel_config 00:20:28.204 01:50:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:28.204 01:50:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:28.204 01:50:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:28.204 01:50:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:28.204 01:50:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:28.204 01:50:27 -- accel/accel.sh@40 -- # local IFS=, 00:20:28.204 01:50:27 -- accel/accel.sh@41 -- # jq -r . 00:20:28.204 [2024-04-24 01:50:27.973540] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:28.204 [2024-04-24 01:50:27.973824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114824 ] 00:20:28.204 [2024-04-24 01:50:28.153851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.461 [2024-04-24 01:50:28.401732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=0x1 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=crc32c 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=0 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case 
"$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=software 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@22 -- # accel_module=software 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=32 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=32 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=1 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val=Yes 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.719 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.719 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:28.719 01:50:28 -- accel/accel.sh@20 -- # val= 00:20:28.720 01:50:28 -- accel/accel.sh@21 -- # case "$var" in 00:20:28.720 01:50:28 -- accel/accel.sh@19 -- # IFS=: 00:20:28.720 01:50:28 -- accel/accel.sh@19 -- # read -r var val 00:20:31.249 01:50:30 -- accel/accel.sh@20 -- # val= 00:20:31.250 01:50:30 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@20 -- # val= 00:20:31.250 01:50:30 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@20 -- # val= 00:20:31.250 01:50:30 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@20 -- # val= 00:20:31.250 01:50:30 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@20 -- # val= 00:20:31.250 01:50:30 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@20 -- # val= 
00:20:31.250 01:50:30 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:31.250 01:50:30 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:20:31.250 01:50:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:31.250 00:20:31.250 real 0m2.848s 00:20:31.250 user 0m2.565s 00:20:31.250 sys 0m0.199s 00:20:31.250 01:50:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:31.250 01:50:30 -- common/autotest_common.sh@10 -- # set +x 00:20:31.250 ************************************ 00:20:31.250 END TEST accel_crc32c_C2 00:20:31.250 ************************************ 00:20:31.250 01:50:30 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:20:31.250 01:50:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:31.250 01:50:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:31.250 01:50:30 -- common/autotest_common.sh@10 -- # set +x 00:20:31.250 ************************************ 00:20:31.250 START TEST accel_copy 00:20:31.250 ************************************ 00:20:31.250 01:50:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:20:31.250 01:50:30 -- accel/accel.sh@16 -- # local accel_opc 00:20:31.250 01:50:30 -- accel/accel.sh@17 -- # local accel_module 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # IFS=: 00:20:31.250 01:50:30 -- accel/accel.sh@19 -- # read -r var val 00:20:31.250 01:50:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:20:31.250 01:50:30 -- accel/accel.sh@12 -- # build_accel_config 00:20:31.250 01:50:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:20:31.250 01:50:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:31.250 01:50:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:31.250 01:50:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:31.250 01:50:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:31.250 01:50:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:31.250 01:50:30 -- accel/accel.sh@40 -- # local IFS=, 00:20:31.250 01:50:30 -- accel/accel.sh@41 -- # jq -r . 00:20:31.250 [2024-04-24 01:50:30.907903] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:20:31.250 [2024-04-24 01:50:30.908571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114893 ] 00:20:31.250 [2024-04-24 01:50:31.086057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.250 [2024-04-24 01:50:31.328503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=0x1 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=copy 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@23 -- # accel_opc=copy 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=software 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@22 -- # accel_module=software 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=32 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=32 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=1 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:31.817 
01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val=Yes 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:31.817 01:50:31 -- accel/accel.sh@20 -- # val= 00:20:31.817 01:50:31 -- accel/accel.sh@21 -- # case "$var" in 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # IFS=: 00:20:31.817 01:50:31 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@20 -- # val= 00:20:33.716 01:50:33 -- accel/accel.sh@21 -- # case "$var" in 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@20 -- # val= 00:20:33.716 01:50:33 -- accel/accel.sh@21 -- # case "$var" in 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@20 -- # val= 00:20:33.716 01:50:33 -- accel/accel.sh@21 -- # case "$var" in 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@20 -- # val= 00:20:33.716 01:50:33 -- accel/accel.sh@21 -- # case "$var" in 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@20 -- # val= 00:20:33.716 01:50:33 -- accel/accel.sh@21 -- # case "$var" in 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@20 -- # val= 00:20:33.716 01:50:33 -- accel/accel.sh@21 -- # case "$var" in 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.716 01:50:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:33.716 01:50:33 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:20:33.716 01:50:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:33.716 00:20:33.716 real 0m2.840s 00:20:33.716 user 0m2.551s 00:20:33.716 sys 0m0.196s 00:20:33.716 01:50:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:33.716 ************************************ 00:20:33.716 END TEST accel_copy 00:20:33.716 ************************************ 00:20:33.716 01:50:33 -- common/autotest_common.sh@10 -- # set +x 00:20:33.716 01:50:33 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:20:33.716 01:50:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:20:33.716 01:50:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.716 01:50:33 -- common/autotest_common.sh@10 -- # set +x 00:20:33.716 ************************************ 00:20:33.716 START TEST accel_fill 00:20:33.716 ************************************ 00:20:33.716 01:50:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:20:33.716 01:50:33 -- accel/accel.sh@16 -- # local accel_opc 00:20:33.716 01:50:33 -- accel/accel.sh@17 -- # local 
accel_module 00:20:33.716 01:50:33 -- accel/accel.sh@19 -- # IFS=: 00:20:33.716 01:50:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:20:33.975 01:50:33 -- accel/accel.sh@19 -- # read -r var val 00:20:33.975 01:50:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:20:33.975 01:50:33 -- accel/accel.sh@12 -- # build_accel_config 00:20:33.975 01:50:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:33.975 01:50:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:33.975 01:50:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:33.975 01:50:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:33.975 01:50:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:33.975 01:50:33 -- accel/accel.sh@40 -- # local IFS=, 00:20:33.975 01:50:33 -- accel/accel.sh@41 -- # jq -r . 00:20:33.975 [2024-04-24 01:50:33.853353] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:33.975 [2024-04-24 01:50:33.853553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114955 ] 00:20:33.975 [2024-04-24 01:50:34.034205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.233 [2024-04-24 01:50:34.277845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.491 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.491 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.491 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.491 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.491 01:50:34 -- accel/accel.sh@20 -- # val=0x1 00:20:34.491 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.491 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.491 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.491 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.491 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.491 01:50:34 -- accel/accel.sh@20 -- # val=fill 00:20:34.491 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.491 01:50:34 -- accel/accel.sh@23 -- # accel_opc=fill 00:20:34.491 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val=0x80 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # 
case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val=software 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@22 -- # accel_module=software 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val=64 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val=64 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val=1 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val=Yes 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:34.492 01:50:34 -- accel/accel.sh@20 -- # val= 00:20:34.492 01:50:34 -- accel/accel.sh@21 -- # case "$var" in 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # IFS=: 00:20:34.492 01:50:34 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@20 -- # val= 00:20:37.030 01:50:36 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@20 -- # val= 00:20:37.030 01:50:36 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@20 -- # val= 00:20:37.030 01:50:36 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@20 -- # val= 00:20:37.030 01:50:36 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@20 -- # val= 00:20:37.030 01:50:36 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@20 -- # val= 00:20:37.030 01:50:36 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@27 -- # 
[[ -n software ]] 00:20:37.030 01:50:36 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:20:37.030 01:50:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:37.030 00:20:37.030 real 0m2.838s 00:20:37.030 user 0m2.590s 00:20:37.030 sys 0m0.164s 00:20:37.030 ************************************ 00:20:37.030 END TEST accel_fill 00:20:37.030 ************************************ 00:20:37.030 01:50:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:37.030 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:20:37.030 01:50:36 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:20:37.030 01:50:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:37.030 01:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.030 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:20:37.030 ************************************ 00:20:37.030 START TEST accel_copy_crc32c 00:20:37.030 ************************************ 00:20:37.030 01:50:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:20:37.030 01:50:36 -- accel/accel.sh@16 -- # local accel_opc 00:20:37.030 01:50:36 -- accel/accel.sh@17 -- # local accel_module 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # IFS=: 00:20:37.030 01:50:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:20:37.030 01:50:36 -- accel/accel.sh@19 -- # read -r var val 00:20:37.030 01:50:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:20:37.030 01:50:36 -- accel/accel.sh@12 -- # build_accel_config 00:20:37.030 01:50:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:37.030 01:50:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:37.030 01:50:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:37.030 01:50:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:37.030 01:50:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:37.030 01:50:36 -- accel/accel.sh@40 -- # local IFS=, 00:20:37.030 01:50:36 -- accel/accel.sh@41 -- # jq -r . 00:20:37.030 [2024-04-24 01:50:36.793799] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:20:37.030 [2024-04-24 01:50:36.794080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115022 ] 00:20:37.031 [2024-04-24 01:50:36.977538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.290 [2024-04-24 01:50:37.305832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.548 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.548 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.548 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.548 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.548 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.548 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.548 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.548 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.548 01:50:37 -- accel/accel.sh@20 -- # val=0x1 00:20:37.548 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=copy_crc32c 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=0 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=software 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@22 -- # accel_module=software 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=32 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=32 
00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=1 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val=Yes 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:37.549 01:50:37 -- accel/accel.sh@20 -- # val= 00:20:37.549 01:50:37 -- accel/accel.sh@21 -- # case "$var" in 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # IFS=: 00:20:37.549 01:50:37 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@20 -- # val= 00:20:40.099 01:50:39 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@20 -- # val= 00:20:40.099 01:50:39 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@20 -- # val= 00:20:40.099 01:50:39 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@20 -- # val= 00:20:40.099 01:50:39 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@20 -- # val= 00:20:40.099 01:50:39 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@20 -- # val= 00:20:40.099 01:50:39 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:40.099 01:50:39 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:20:40.099 01:50:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:40.099 00:20:40.099 real 0m2.990s 00:20:40.099 user 0m2.678s 00:20:40.099 sys 0m0.228s 00:20:40.099 ************************************ 00:20:40.099 END TEST accel_copy_crc32c 00:20:40.099 ************************************ 00:20:40.099 01:50:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:40.099 01:50:39 -- common/autotest_common.sh@10 -- # set +x 00:20:40.099 01:50:39 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:20:40.099 01:50:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:20:40.099 01:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.099 01:50:39 -- common/autotest_common.sh@10 -- # set +x 00:20:40.099 ************************************ 00:20:40.099 START TEST accel_copy_crc32c_C2 00:20:40.099 ************************************ 00:20:40.099 01:50:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:20:40.099 01:50:39 -- accel/accel.sh@16 -- # local accel_opc 00:20:40.099 01:50:39 -- accel/accel.sh@17 -- # local accel_module 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # IFS=: 00:20:40.099 01:50:39 -- accel/accel.sh@19 -- # read -r var val 00:20:40.099 01:50:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:20:40.099 01:50:39 -- accel/accel.sh@12 -- # build_accel_config 00:20:40.099 01:50:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:20:40.099 01:50:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:40.099 01:50:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:40.099 01:50:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:40.099 01:50:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:40.099 01:50:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:40.099 01:50:39 -- accel/accel.sh@40 -- # local IFS=, 00:20:40.099 01:50:39 -- accel/accel.sh@41 -- # jq -r . 00:20:40.099 [2024-04-24 01:50:39.888643] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:40.099 [2024-04-24 01:50:39.888837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115089 ] 00:20:40.099 [2024-04-24 01:50:40.068636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.359 [2024-04-24 01:50:40.398756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=0x1 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=copy_crc32c 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=0 00:20:40.618 01:50:40 -- 
accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val='8192 bytes' 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=software 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@22 -- # accel_module=software 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=32 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=32 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=1 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val=Yes 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:40.618 01:50:40 -- accel/accel.sh@20 -- # val= 00:20:40.618 01:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # IFS=: 00:20:40.618 01:50:40 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@20 -- # val= 00:20:43.149 01:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@20 -- # val= 00:20:43.149 01:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@20 -- # val= 00:20:43.149 01:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 
00:20:43.149 01:50:42 -- accel/accel.sh@20 -- # val= 00:20:43.149 01:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@20 -- # val= 00:20:43.149 01:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@20 -- # val= 00:20:43.149 01:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:43.149 01:50:42 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:20:43.149 01:50:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:43.149 00:20:43.149 real 0m3.040s 00:20:43.149 user 0m2.765s 00:20:43.149 sys 0m0.194s 00:20:43.149 01:50:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:43.149 01:50:42 -- common/autotest_common.sh@10 -- # set +x 00:20:43.149 ************************************ 00:20:43.149 END TEST accel_copy_crc32c_C2 00:20:43.149 ************************************ 00:20:43.149 01:50:42 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:20:43.149 01:50:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:43.149 01:50:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:43.149 01:50:42 -- common/autotest_common.sh@10 -- # set +x 00:20:43.149 ************************************ 00:20:43.149 START TEST accel_dualcast 00:20:43.149 ************************************ 00:20:43.149 01:50:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:20:43.149 01:50:42 -- accel/accel.sh@16 -- # local accel_opc 00:20:43.149 01:50:42 -- accel/accel.sh@17 -- # local accel_module 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # IFS=: 00:20:43.149 01:50:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:20:43.149 01:50:42 -- accel/accel.sh@19 -- # read -r var val 00:20:43.149 01:50:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:20:43.149 01:50:42 -- accel/accel.sh@12 -- # build_accel_config 00:20:43.149 01:50:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:43.149 01:50:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:43.149 01:50:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:43.149 01:50:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:43.149 01:50:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:43.149 01:50:42 -- accel/accel.sh@40 -- # local IFS=, 00:20:43.149 01:50:42 -- accel/accel.sh@41 -- # jq -r . 00:20:43.149 [2024-04-24 01:50:43.050287] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
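For readers following the trace: each accel test above boils down to a short run of the standalone accel_perf example binary. Judging from the command lines recorded in the log, -t sets the run time in seconds, -w picks the workload, -y requests result verification, and -C sets the chained-buffer count used by copy_crc32c; -c /dev/fd/62 is the JSON accel config that build_accel_config pipes in. Those flag readings are inferred from the trace rather than confirmed by accel_perf's help text. A rough standalone equivalent of the copy_crc32c_C2 run, assuming the same build path and an already-configured hugepage setup:
# Sketch only; drop -c entirely when no JSON accel config is needed.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2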
00:20:43.149 [2024-04-24 01:50:43.050789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115159 ] 00:20:43.407 [2024-04-24 01:50:43.243019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.666 [2024-04-24 01:50:43.574100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=0x1 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=dualcast 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=software 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@22 -- # accel_module=software 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=32 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=32 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=1 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val='1 seconds' 
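The repeated accel/accel.sh@19-21 entries above (IFS=:, read -r var val, case "$var" in) are the harness looping over accel_perf's output, splitting each line on ':' and picking out fields such as the module and the workload type. The key strings matched by the case statement are not visible in the xtrace output, so the ones below are illustrative placeholders rather than SPDK's actual output format; only the shape of the loop is taken from the log.
# Hypothetical key names; the real accel_perf output lines are not shown in the log.
while IFS=: read -r var val; do
    case "$var" in
        *module*)   accel_module=$val ;;   # e.g. software
        *workload*) accel_opc=$val ;;      # e.g. dualcast
    esac
done < perf_output.txt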
00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val=Yes 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:43.925 01:50:43 -- accel/accel.sh@20 -- # val= 00:20:43.925 01:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # IFS=: 00:20:43.925 01:50:43 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@20 -- # val= 00:20:46.458 01:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@20 -- # val= 00:20:46.458 01:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@20 -- # val= 00:20:46.458 01:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@20 -- # val= 00:20:46.458 01:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@20 -- # val= 00:20:46.458 01:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@20 -- # val= 00:20:46.458 01:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:46.458 01:50:46 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:20:46.458 01:50:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:46.458 00:20:46.458 real 0m3.189s 00:20:46.458 user 0m2.875s 00:20:46.458 sys 0m0.227s 00:20:46.458 01:50:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:46.458 01:50:46 -- common/autotest_common.sh@10 -- # set +x 00:20:46.458 ************************************ 00:20:46.458 END TEST accel_dualcast 00:20:46.458 ************************************ 00:20:46.458 01:50:46 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:20:46.458 01:50:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:46.458 01:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.458 01:50:46 -- common/autotest_common.sh@10 -- # set +x 00:20:46.458 ************************************ 00:20:46.458 START TEST accel_compare 00:20:46.458 ************************************ 00:20:46.458 01:50:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:20:46.458 01:50:46 -- accel/accel.sh@16 -- # local accel_opc 00:20:46.458 01:50:46 -- accel/accel.sh@17 -- # local 
accel_module 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # IFS=: 00:20:46.458 01:50:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:20:46.458 01:50:46 -- accel/accel.sh@19 -- # read -r var val 00:20:46.458 01:50:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:20:46.458 01:50:46 -- accel/accel.sh@12 -- # build_accel_config 00:20:46.458 01:50:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:46.458 01:50:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:46.458 01:50:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:46.458 01:50:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:46.458 01:50:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:46.458 01:50:46 -- accel/accel.sh@40 -- # local IFS=, 00:20:46.458 01:50:46 -- accel/accel.sh@41 -- # jq -r . 00:20:46.458 [2024-04-24 01:50:46.344582] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:46.458 [2024-04-24 01:50:46.344777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115220 ] 00:20:46.458 [2024-04-24 01:50:46.523306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.716 [2024-04-24 01:50:46.779316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val=0x1 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val=compare 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@23 -- # accel_opc=compare 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val=software 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 
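The build_accel_config trace at accel/accel.sh@31-41 above (accel_json_cfg=(), three '[[ 0 -gt 0 ]]' guards, 'local IFS=,' and 'jq -r .') suggests the harness collects optional JSON fragments, joins them with commas, and validates the result with jq before handing it to accel_perf on /dev/fd/62. In every run shown here the array stays empty, so the guards all skip. A minimal sketch of that pattern, with the JSON content deliberately left out because the log never shows it:
build_accel_config_sketch() {
    local IFS=,
    local -a accel_json_cfg=()                 # stays empty in the runs above
    ((${#accel_json_cfg[@]} > 0)) || return 0  # mirrors the skipped [[ 0 -gt 0 ]] guards
    printf '%s\n' "${accel_json_cfg[*]}" | jq -r .   # join fragments, sanity-check with jq
}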
00:20:47.283 01:50:47 -- accel/accel.sh@22 -- # accel_module=software 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val=32 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val=32 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.283 01:50:47 -- accel/accel.sh@20 -- # val=1 00:20:47.283 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.283 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.284 01:50:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:47.284 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.284 01:50:47 -- accel/accel.sh@20 -- # val=Yes 00:20:47.284 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.284 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.284 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:47.284 01:50:47 -- accel/accel.sh@20 -- # val= 00:20:47.284 01:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # IFS=: 00:20:47.284 01:50:47 -- accel/accel.sh@19 -- # read -r var val 00:20:49.193 01:50:49 -- accel/accel.sh@20 -- # val= 00:20:49.194 01:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.194 01:50:49 -- accel/accel.sh@20 -- # val= 00:20:49.194 01:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.194 01:50:49 -- accel/accel.sh@20 -- # val= 00:20:49.194 01:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.194 01:50:49 -- accel/accel.sh@20 -- # val= 00:20:49.194 01:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.194 01:50:49 -- accel/accel.sh@20 -- # val= 00:20:49.194 01:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.194 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.453 01:50:49 -- accel/accel.sh@20 -- # val= 00:20:49.453 01:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:20:49.453 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.453 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.453 01:50:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:49.453 01:50:49 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:20:49.453 01:50:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:49.453 00:20:49.453 real 0m3.018s 00:20:49.453 user 0m2.745s 00:20:49.453 sys 
0m0.188s 00:20:49.453 01:50:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:49.453 ************************************ 00:20:49.453 END TEST accel_compare 00:20:49.453 ************************************ 00:20:49.453 01:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:49.453 01:50:49 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:20:49.453 01:50:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:49.453 01:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.453 01:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:49.453 ************************************ 00:20:49.453 START TEST accel_xor 00:20:49.453 ************************************ 00:20:49.453 01:50:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:20:49.453 01:50:49 -- accel/accel.sh@16 -- # local accel_opc 00:20:49.453 01:50:49 -- accel/accel.sh@17 -- # local accel_module 00:20:49.453 01:50:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:20:49.453 01:50:49 -- accel/accel.sh@19 -- # IFS=: 00:20:49.453 01:50:49 -- accel/accel.sh@19 -- # read -r var val 00:20:49.453 01:50:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:20:49.453 01:50:49 -- accel/accel.sh@12 -- # build_accel_config 00:20:49.453 01:50:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:49.453 01:50:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:49.453 01:50:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:49.453 01:50:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:49.453 01:50:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:49.453 01:50:49 -- accel/accel.sh@40 -- # local IFS=, 00:20:49.453 01:50:49 -- accel/accel.sh@41 -- # jq -r . 00:20:49.453 [2024-04-24 01:50:49.451185] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
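Every test above is bracketed by run_test: an asterisk banner with START TEST, the timed body, the real/user/sys summary, and a matching END TEST banner. The sketch below only reproduces that observable pattern; it is not SPDK's actual run_test from autotest_common.sh.
run_test_sketch() {
    local name=$1 banner; shift
    banner=$(printf '*%.0s' {1..36})
    printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
    time "$@"    # produces the real/user/sys lines seen after each test
    printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
}
# e.g. run_test_sketch accel_xor accel_test -t 1 -w xor -y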
00:20:49.453 [2024-04-24 01:50:49.451388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115287 ] 00:20:49.712 [2024-04-24 01:50:49.636310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.970 [2024-04-24 01:50:49.989707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=0x1 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=xor 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@23 -- # accel_opc=xor 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=2 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=software 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@22 -- # accel_module=software 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=32 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=32 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=1 00:20:50.536 01:50:50 -- 
accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val=Yes 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:50.536 01:50:50 -- accel/accel.sh@20 -- # val= 00:20:50.536 01:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # IFS=: 00:20:50.536 01:50:50 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@20 -- # val= 00:20:52.477 01:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@20 -- # val= 00:20:52.477 01:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@20 -- # val= 00:20:52.477 01:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@20 -- # val= 00:20:52.477 01:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@20 -- # val= 00:20:52.477 01:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@20 -- # val= 00:20:52.477 01:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.477 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.477 01:50:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:52.477 01:50:52 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:20:52.477 01:50:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:52.477 00:20:52.477 real 0m3.123s 00:20:52.477 user 0m2.828s 00:20:52.477 sys 0m0.199s 00:20:52.477 01:50:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.477 ************************************ 00:20:52.477 END TEST accel_xor 00:20:52.477 ************************************ 00:20:52.477 01:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:52.736 01:50:52 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:20:52.736 01:50:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:20:52.736 01:50:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.736 01:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:52.736 ************************************ 00:20:52.736 START TEST accel_xor 00:20:52.736 ************************************ 00:20:52.736 
01:50:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:20:52.736 01:50:52 -- accel/accel.sh@16 -- # local accel_opc 00:20:52.736 01:50:52 -- accel/accel.sh@17 -- # local accel_module 00:20:52.736 01:50:52 -- accel/accel.sh@19 -- # IFS=: 00:20:52.736 01:50:52 -- accel/accel.sh@19 -- # read -r var val 00:20:52.736 01:50:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:20:52.736 01:50:52 -- accel/accel.sh@12 -- # build_accel_config 00:20:52.736 01:50:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:20:52.736 01:50:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:52.736 01:50:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:52.736 01:50:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:52.736 01:50:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:52.736 01:50:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:52.736 01:50:52 -- accel/accel.sh@40 -- # local IFS=, 00:20:52.736 01:50:52 -- accel/accel.sh@41 -- # jq -r . 00:20:52.736 [2024-04-24 01:50:52.677003] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:52.736 [2024-04-24 01:50:52.677202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115356 ] 00:20:52.993 [2024-04-24 01:50:52.862324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.251 [2024-04-24 01:50:53.201895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=0x1 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=xor 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@23 -- # accel_opc=xor 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=3 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 
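The second xor case differs from the first only by the '-x 3' argument visible on the accel_test and accel_perf command lines above; in the trace the value 3 is read back right after the xor opcode, which plausibly means three xor source buffers, though the log itself never names that field. The standalone shape of the run, with the same path and setup assumptions as before:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3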
00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=software 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@22 -- # accel_module=software 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=32 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=32 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=1 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val=Yes 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:53.819 01:50:53 -- accel/accel.sh@20 -- # val= 00:20:53.819 01:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # IFS=: 00:20:53.819 01:50:53 -- accel/accel.sh@19 -- # read -r var val 00:20:55.719 01:50:55 -- accel/accel.sh@20 -- # val= 00:20:55.719 01:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:20:55.719 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.719 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.719 01:50:55 -- accel/accel.sh@20 -- # val= 00:20:55.719 01:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:20:55.719 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.719 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.719 01:50:55 -- accel/accel.sh@20 -- # val= 00:20:55.719 01:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:20:55.719 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.720 01:50:55 -- accel/accel.sh@20 -- # val= 00:20:55.720 01:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.720 01:50:55 -- accel/accel.sh@20 -- # val= 00:20:55.720 01:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.720 01:50:55 -- accel/accel.sh@20 -- # val= 00:20:55.720 01:50:55 -- accel/accel.sh@21 -- # case "$var" in 
00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.720 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.979 01:50:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:55.979 01:50:55 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:20:55.979 01:50:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:55.979 00:20:55.979 real 0m3.199s 00:20:55.979 user 0m2.896s 00:20:55.979 sys 0m0.221s 00:20:55.979 01:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:55.979 01:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:55.979 ************************************ 00:20:55.979 END TEST accel_xor 00:20:55.979 ************************************ 00:20:55.979 01:50:55 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:20:55.979 01:50:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:55.979 01:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.979 01:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:55.979 ************************************ 00:20:55.979 START TEST accel_dif_verify 00:20:55.979 ************************************ 00:20:55.979 01:50:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:20:55.979 01:50:55 -- accel/accel.sh@16 -- # local accel_opc 00:20:55.979 01:50:55 -- accel/accel.sh@17 -- # local accel_module 00:20:55.979 01:50:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:20:55.979 01:50:55 -- accel/accel.sh@19 -- # IFS=: 00:20:55.979 01:50:55 -- accel/accel.sh@19 -- # read -r var val 00:20:55.979 01:50:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:20:55.979 01:50:55 -- accel/accel.sh@12 -- # build_accel_config 00:20:55.979 01:50:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:55.979 01:50:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:55.979 01:50:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:55.979 01:50:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:55.979 01:50:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:55.979 01:50:55 -- accel/accel.sh@40 -- # local IFS=, 00:20:55.979 01:50:55 -- accel/accel.sh@41 -- # jq -r . 00:20:55.979 [2024-04-24 01:50:55.977293] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
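The three accel/accel.sh@27 checks that close each test above assert that a module name and an opcode were parsed out of accel_perf's output and that the software module handled the operation. Reconstructed with the variable names suggested by accel.sh@22-23 (the xtrace output only shows them after expansion):
[[ -n $accel_module ]]           # a module name was parsed
[[ -n $accel_opc ]]              # the expected opcode was seen
[[ $accel_module == software ]]  # and the software engine ran it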
00:20:55.979 [2024-04-24 01:50:55.977585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115421 ] 00:20:56.238 [2024-04-24 01:50:56.175000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.496 [2024-04-24 01:50:56.507704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.799 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:56.799 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:56.799 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:56.799 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:56.799 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:56.799 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:56.799 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:56.799 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val=0x1 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val=dif_verify 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val='512 bytes' 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val='8 bytes' 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val=software 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@22 -- # accel_module=software 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- 
accel/accel.sh@20 -- # val=32 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val=32 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val=1 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val=No 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:57.058 01:50:56 -- accel/accel.sh@20 -- # val= 00:20:57.058 01:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # IFS=: 00:20:57.058 01:50:56 -- accel/accel.sh@19 -- # read -r var val 00:20:58.962 01:50:59 -- accel/accel.sh@20 -- # val= 00:20:58.962 01:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:58.962 01:50:59 -- accel/accel.sh@20 -- # val= 00:20:58.962 01:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:58.962 01:50:59 -- accel/accel.sh@20 -- # val= 00:20:58.962 01:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:58.962 01:50:59 -- accel/accel.sh@20 -- # val= 00:20:58.962 01:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:58.962 01:50:59 -- accel/accel.sh@20 -- # val= 00:20:58.962 01:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:58.962 01:50:59 -- accel/accel.sh@20 -- # val= 00:20:58.962 01:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:58.962 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:59.220 01:50:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:20:59.220 01:50:59 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:20:59.220 01:50:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:59.220 00:20:59.220 real 0m3.151s 00:20:59.220 user 0m2.819s 00:20:59.220 sys 0m0.252s 00:20:59.220 ************************************ 00:20:59.220 END TEST accel_dif_verify 00:20:59.220 ************************************ 00:20:59.220 01:50:59 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:20:59.220 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 01:50:59 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:20:59.220 01:50:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:59.220 01:50:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:59.220 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 ************************************ 00:20:59.220 START TEST accel_dif_generate 00:20:59.220 ************************************ 00:20:59.220 01:50:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:20:59.220 01:50:59 -- accel/accel.sh@16 -- # local accel_opc 00:20:59.220 01:50:59 -- accel/accel.sh@17 -- # local accel_module 00:20:59.220 01:50:59 -- accel/accel.sh@19 -- # IFS=: 00:20:59.220 01:50:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:20:59.220 01:50:59 -- accel/accel.sh@19 -- # read -r var val 00:20:59.220 01:50:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:20:59.220 01:50:59 -- accel/accel.sh@12 -- # build_accel_config 00:20:59.220 01:50:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:20:59.220 01:50:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:20:59.220 01:50:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:20:59.220 01:50:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:20:59.220 01:50:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:20:59.220 01:50:59 -- accel/accel.sh@40 -- # local IFS=, 00:20:59.220 01:50:59 -- accel/accel.sh@41 -- # jq -r . 00:20:59.220 [2024-04-24 01:50:59.240249] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:20:59.220 [2024-04-24 01:50:59.240520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115495 ] 00:20:59.477 [2024-04-24 01:50:59.434409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.785 [2024-04-24 01:50:59.758976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=0x1 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=dif_generate 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val='512 bytes' 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val='8 bytes' 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=software 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@22 -- # accel_module=software 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=32 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=32 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=1 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val=No 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:00.077 01:51:00 -- accel/accel.sh@20 -- # val= 00:21:00.077 01:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # IFS=: 00:21:00.077 01:51:00 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@20 -- # val= 00:21:02.619 01:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- 
accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@20 -- # val= 00:21:02.619 01:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@20 -- # val= 00:21:02.619 01:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@20 -- # val= 00:21:02.619 01:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@20 -- # val= 00:21:02.619 01:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@20 -- # val= 00:21:02.619 01:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:02.619 01:51:02 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:21:02.619 01:51:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:02.619 00:21:02.619 real 0m2.986s 00:21:02.619 user 0m2.694s 00:21:02.619 sys 0m0.204s 00:21:02.619 01:51:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:02.619 ************************************ 00:21:02.619 END TEST accel_dif_generate 00:21:02.619 ************************************ 00:21:02.619 01:51:02 -- common/autotest_common.sh@10 -- # set +x 00:21:02.619 01:51:02 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:21:02.619 01:51:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:02.619 01:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:02.619 01:51:02 -- common/autotest_common.sh@10 -- # set +x 00:21:02.619 ************************************ 00:21:02.619 START TEST accel_dif_generate_copy 00:21:02.619 ************************************ 00:21:02.619 01:51:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:21:02.619 01:51:02 -- accel/accel.sh@16 -- # local accel_opc 00:21:02.619 01:51:02 -- accel/accel.sh@17 -- # local accel_module 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # IFS=: 00:21:02.619 01:51:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:21:02.619 01:51:02 -- accel/accel.sh@19 -- # read -r var val 00:21:02.619 01:51:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:21:02.619 01:51:02 -- accel/accel.sh@12 -- # build_accel_config 00:21:02.619 01:51:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:02.619 01:51:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:02.619 01:51:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:02.619 01:51:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:02.619 01:51:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:02.619 01:51:02 -- accel/accel.sh@40 -- # local IFS=, 00:21:02.619 01:51:02 -- accel/accel.sh@41 -- # jq -r . 00:21:02.619 [2024-04-24 01:51:02.315024] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
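The dif_* traces above add '512 bytes' and '8 bytes' values alongside the usual 4096-byte buffers; reading those as the DIF block size and per-block metadata is an assumption, since the log never labels them. Note also that the dif workloads are launched without -y, which lines up with the 'val=No' entries where the earlier tests showed 'val=Yes', though that link is likewise inferred. The invocation itself follows the same one-second pattern as every other workload:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy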
00:21:02.619 [2024-04-24 01:51:02.315302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115565 ] 00:21:02.619 [2024-04-24 01:51:02.479045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.891 [2024-04-24 01:51:02.729775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.157 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.157 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val=0x1 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val=software 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@22 -- # accel_module=software 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val=32 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val=32 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 
-- # val=1 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val=No 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:03.158 01:51:03 -- accel/accel.sh@20 -- # val= 00:21:03.158 01:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # IFS=: 00:21:03.158 01:51:03 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:05.095 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:05.095 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:05.095 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:05.095 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:05.095 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:05.095 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.095 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.095 01:51:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:05.095 01:51:05 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:21:05.095 01:51:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:05.095 00:21:05.095 real 0m2.875s 00:21:05.095 user 0m2.571s 00:21:05.095 sys 0m0.214s 00:21:05.095 01:51:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.095 ************************************ 00:21:05.095 END TEST accel_dif_generate_copy 00:21:05.095 ************************************ 00:21:05.095 01:51:05 -- common/autotest_common.sh@10 -- # set +x 00:21:05.353 01:51:05 -- accel/accel.sh@115 -- # [[ y == y ]] 00:21:05.353 01:51:05 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:05.353 01:51:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:21:05.353 01:51:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.353 01:51:05 -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.353 ************************************ 00:21:05.353 START TEST accel_comp 00:21:05.353 ************************************ 00:21:05.353 01:51:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:05.353 01:51:05 -- accel/accel.sh@16 -- # local accel_opc 00:21:05.353 01:51:05 -- accel/accel.sh@17 -- # local accel_module 00:21:05.353 01:51:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:05.353 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:05.353 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:05.353 01:51:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:05.353 01:51:05 -- accel/accel.sh@12 -- # build_accel_config 00:21:05.353 01:51:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:05.353 01:51:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:05.353 01:51:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:05.353 01:51:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:05.353 01:51:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:05.353 01:51:05 -- accel/accel.sh@40 -- # local IFS=, 00:21:05.353 01:51:05 -- accel/accel.sh@41 -- # jq -r . 00:21:05.353 [2024-04-24 01:51:05.277265] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:05.353 [2024-04-24 01:51:05.277760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115626 ] 00:21:05.353 [2024-04-24 01:51:05.437954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.624 [2024-04-24 01:51:05.696911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=0x1 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=compress 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@23 
-- # accel_opc=compress 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=software 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@22 -- # accel_module=software 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=32 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=32 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=1 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val=No 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:06.200 01:51:05 -- accel/accel.sh@20 -- # val= 00:21:06.200 01:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:21:06.200 01:51:05 -- accel/accel.sh@19 -- # IFS=: 00:21:06.201 01:51:05 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.154 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.154 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.154 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # 
read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.154 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.154 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.154 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:08.154 01:51:08 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:21:08.154 01:51:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:08.154 00:21:08.154 real 0m2.858s 00:21:08.154 user 0m2.613s 00:21:08.154 sys 0m0.156s 00:21:08.154 01:51:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:08.154 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:21:08.154 ************************************ 00:21:08.154 END TEST accel_comp 00:21:08.154 ************************************ 00:21:08.154 01:51:08 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.154 01:51:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:21:08.154 01:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:08.154 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:21:08.154 ************************************ 00:21:08.154 START TEST accel_decomp 00:21:08.154 ************************************ 00:21:08.154 01:51:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.154 01:51:08 -- accel/accel.sh@16 -- # local accel_opc 00:21:08.154 01:51:08 -- accel/accel.sh@17 -- # local accel_module 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.154 01:51:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.154 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.154 01:51:08 -- accel/accel.sh@12 -- # build_accel_config 00:21:08.154 01:51:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:08.154 01:51:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:08.154 01:51:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:08.154 01:51:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:08.154 01:51:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:08.154 01:51:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:08.154 01:51:08 -- accel/accel.sh@40 -- # local IFS=, 00:21:08.154 01:51:08 -- accel/accel.sh@41 -- # jq -r . 00:21:08.414 [2024-04-24 01:51:08.249498] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:08.414 [2024-04-24 01:51:08.249672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115687 ] 00:21:08.414 [2024-04-24 01:51:08.428240] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.696 [2024-04-24 01:51:08.674714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=0x1 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=decompress 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=software 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@22 -- # accel_module=software 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=32 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- 
accel/accel.sh@20 -- # val=32 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=1 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val=Yes 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:08.977 01:51:08 -- accel/accel.sh@20 -- # val= 00:21:08.977 01:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # IFS=: 00:21:08.977 01:51:08 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:11.552 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:11.552 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:11.552 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:11.552 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:11.552 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:11.552 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:11.552 01:51:11 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:11.552 01:51:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:11.552 00:21:11.552 real 0m2.888s 00:21:11.552 user 0m2.607s 00:21:11.552 sys 0m0.201s 00:21:11.552 01:51:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:11.552 ************************************ 00:21:11.552 END TEST accel_decomp 00:21:11.552 01:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:11.552 ************************************ 00:21:11.552 01:51:11 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:21:11.552 01:51:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:21:11.552 01:51:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:11.552 01:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:11.552 ************************************ 00:21:11.552 START TEST accel_decmop_full 00:21:11.552 ************************************ 00:21:11.552 01:51:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:11.552 01:51:11 -- accel/accel.sh@16 -- # local accel_opc 00:21:11.552 01:51:11 -- accel/accel.sh@17 -- # local accel_module 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:11.552 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:11.552 01:51:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:11.552 01:51:11 -- accel/accel.sh@12 -- # build_accel_config 00:21:11.552 01:51:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:11.552 01:51:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:11.552 01:51:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:11.552 01:51:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:11.552 01:51:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:11.552 01:51:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:11.552 01:51:11 -- accel/accel.sh@40 -- # local IFS=, 00:21:11.552 01:51:11 -- accel/accel.sh@41 -- # jq -r . 00:21:11.552 [2024-04-24 01:51:11.228631] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:11.552 [2024-04-24 01:51:11.228830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115754 ] 00:21:11.552 [2024-04-24 01:51:11.409462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.811 [2024-04-24 01:51:11.660598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=0x1 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 
01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=decompress 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val='111250 bytes' 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=software 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@22 -- # accel_module=software 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=32 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=32 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=1 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val=Yes 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.069 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.069 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.069 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.070 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.070 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:12.070 01:51:11 -- accel/accel.sh@20 -- # val= 00:21:12.070 01:51:11 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.070 01:51:11 -- accel/accel.sh@19 -- # IFS=: 00:21:12.070 01:51:11 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:14.604 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:14.604 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r 
var val 00:21:14.604 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:14.604 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:14.604 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:14.604 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:14.604 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:14.604 01:51:14 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:14.604 ************************************ 00:21:14.604 END TEST accel_decmop_full 00:21:14.604 ************************************ 00:21:14.604 01:51:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:14.604 00:21:14.604 real 0m2.903s 00:21:14.604 user 0m2.614s 00:21:14.604 sys 0m0.202s 00:21:14.604 01:51:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:14.604 01:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:14.604 01:51:14 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:14.604 01:51:14 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:21:14.604 01:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.604 01:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:14.604 ************************************ 00:21:14.604 START TEST accel_decomp_mcore 00:21:14.604 ************************************ 00:21:14.604 01:51:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:14.604 01:51:14 -- accel/accel.sh@16 -- # local accel_opc 00:21:14.604 01:51:14 -- accel/accel.sh@17 -- # local accel_module 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:14.604 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:14.604 01:51:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:14.604 01:51:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:14.604 01:51:14 -- accel/accel.sh@12 -- # build_accel_config 00:21:14.604 01:51:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:14.604 01:51:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:14.604 01:51:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:14.604 01:51:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:14.604 01:51:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:14.604 01:51:14 -- accel/accel.sh@40 -- # local IFS=, 00:21:14.604 01:51:14 -- accel/accel.sh@41 -- # jq -r . 00:21:14.604 [2024-04-24 01:51:14.221850] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:14.605 [2024-04-24 01:51:14.222013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115817 ] 00:21:14.605 [2024-04-24 01:51:14.432981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.863 [2024-04-24 01:51:14.694492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.864 [2024-04-24 01:51:14.694603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.864 [2024-04-24 01:51:14.694750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.864 [2024-04-24 01:51:14.694751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.184 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.184 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.184 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.184 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.184 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.184 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.184 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.184 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.184 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.184 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.184 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=0xf 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=decompress 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=software 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@22 -- # accel_module=software 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 
00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=32 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=32 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=1 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val=Yes 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:15.185 01:51:14 -- accel/accel.sh@20 -- # val= 00:21:15.185 01:51:14 -- accel/accel.sh@21 -- # case "$var" in 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # IFS=: 00:21:15.185 01:51:14 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.087 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.087 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.087 01:51:17 -- 
accel/accel.sh@19 -- # read -r var val 00:21:17.087 01:51:17 -- accel/accel.sh@20 -- # val= 00:21:17.088 01:51:17 -- accel/accel.sh@21 -- # case "$var" in 00:21:17.088 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.088 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.088 01:51:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:17.088 01:51:17 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:17.088 01:51:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:17.088 00:21:17.088 real 0m2.959s 00:21:17.088 user 0m8.543s 00:21:17.088 sys 0m0.227s 00:21:17.088 01:51:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:17.088 01:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.088 ************************************ 00:21:17.088 END TEST accel_decomp_mcore 00:21:17.088 ************************************ 00:21:17.390 01:51:17 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:17.390 01:51:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:21:17.390 01:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:17.390 01:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.390 ************************************ 00:21:17.390 START TEST accel_decomp_full_mcore 00:21:17.390 ************************************ 00:21:17.390 01:51:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:17.390 01:51:17 -- accel/accel.sh@16 -- # local accel_opc 00:21:17.390 01:51:17 -- accel/accel.sh@17 -- # local accel_module 00:21:17.390 01:51:17 -- accel/accel.sh@19 -- # IFS=: 00:21:17.390 01:51:17 -- accel/accel.sh@19 -- # read -r var val 00:21:17.390 01:51:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:17.390 01:51:17 -- accel/accel.sh@12 -- # build_accel_config 00:21:17.390 01:51:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:17.390 01:51:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:17.390 01:51:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:17.390 01:51:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:17.390 01:51:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:17.390 01:51:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:17.390 01:51:17 -- accel/accel.sh@40 -- # local IFS=, 00:21:17.390 01:51:17 -- accel/accel.sh@41 -- # jq -r . 00:21:17.390 [2024-04-24 01:51:17.277613] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:17.390 [2024-04-24 01:51:17.277758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115887 ] 00:21:17.666 [2024-04-24 01:51:17.459064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.666 [2024-04-24 01:51:17.728100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.666 [2024-04-24 01:51:17.728256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.666 [2024-04-24 01:51:17.728321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.666 [2024-04-24 01:51:17.728325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=0xf 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=decompress 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val='111250 bytes' 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=software 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@22 -- # accel_module=software 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 
00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=32 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=32 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=1 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val=Yes 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:18.242 01:51:18 -- accel/accel.sh@20 -- # val= 00:21:18.242 01:51:18 -- accel/accel.sh@21 -- # case "$var" in 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # IFS=: 00:21:18.242 01:51:18 -- accel/accel.sh@19 -- # read -r var val 00:21:20.144 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- 
accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@20 -- # val= 00:21:20.145 01:51:20 -- accel/accel.sh@21 -- # case "$var" in 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.145 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.145 01:51:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:20.145 01:51:20 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:20.145 01:51:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:20.145 00:21:20.145 real 0m2.990s 00:21:20.145 user 0m8.719s 00:21:20.145 sys 0m0.216s 00:21:20.145 01:51:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.145 01:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:20.145 ************************************ 00:21:20.145 END TEST accel_decomp_full_mcore 00:21:20.145 ************************************ 00:21:20.403 01:51:20 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:21:20.403 01:51:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:21:20.403 01:51:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:20.403 01:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:20.403 ************************************ 00:21:20.403 START TEST accel_decomp_mthread 00:21:20.403 ************************************ 00:21:20.403 01:51:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:21:20.403 01:51:20 -- accel/accel.sh@16 -- # local accel_opc 00:21:20.403 01:51:20 -- accel/accel.sh@17 -- # local accel_module 00:21:20.403 01:51:20 -- accel/accel.sh@19 -- # IFS=: 00:21:20.403 01:51:20 -- accel/accel.sh@19 -- # read -r var val 00:21:20.403 01:51:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:21:20.403 01:51:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:21:20.403 01:51:20 -- accel/accel.sh@12 -- # build_accel_config 00:21:20.403 01:51:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:20.403 01:51:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:20.403 01:51:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:20.403 01:51:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:20.403 01:51:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:20.403 01:51:20 -- accel/accel.sh@40 -- # local IFS=, 00:21:20.403 01:51:20 -- accel/accel.sh@41 -- # jq -r . 00:21:20.403 [2024-04-24 01:51:20.376034] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:20.403 [2024-04-24 01:51:20.376225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115957 ] 00:21:20.661 [2024-04-24 01:51:20.552517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.920 [2024-04-24 01:51:20.806466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=0x1 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=decompress 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=software 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@22 -- # accel_module=software 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=32 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- 
accel/accel.sh@20 -- # val=32 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=2 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val=Yes 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:21.178 01:51:21 -- accel/accel.sh@20 -- # val= 00:21:21.178 01:51:21 -- accel/accel.sh@21 -- # case "$var" in 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # IFS=: 00:21:21.178 01:51:21 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@20 -- # val= 00:21:23.707 01:51:23 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:23.707 01:51:23 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:23.707 01:51:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:23.707 00:21:23.707 real 0m2.858s 00:21:23.707 user 0m2.603s 00:21:23.707 sys 0m0.169s 00:21:23.707 ************************************ 00:21:23.707 END TEST accel_decomp_mthread 00:21:23.707 ************************************ 00:21:23.707 01:51:23 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:21:23.707 01:51:23 -- common/autotest_common.sh@10 -- # set +x 00:21:23.707 01:51:23 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:21:23.707 01:51:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:21:23.707 01:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.707 01:51:23 -- common/autotest_common.sh@10 -- # set +x 00:21:23.707 ************************************ 00:21:23.707 START TEST accel_deomp_full_mthread 00:21:23.707 ************************************ 00:21:23.707 01:51:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:21:23.707 01:51:23 -- accel/accel.sh@16 -- # local accel_opc 00:21:23.707 01:51:23 -- accel/accel.sh@17 -- # local accel_module 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # IFS=: 00:21:23.707 01:51:23 -- accel/accel.sh@19 -- # read -r var val 00:21:23.707 01:51:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:21:23.707 01:51:23 -- accel/accel.sh@12 -- # build_accel_config 00:21:23.707 01:51:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:21:23.707 01:51:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:23.707 01:51:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:23.707 01:51:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:23.707 01:51:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:23.707 01:51:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:23.707 01:51:23 -- accel/accel.sh@40 -- # local IFS=, 00:21:23.707 01:51:23 -- accel/accel.sh@41 -- # jq -r . 00:21:23.707 [2024-04-24 01:51:23.335700] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:23.707 [2024-04-24 01:51:23.335896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116021 ] 00:21:23.707 [2024-04-24 01:51:23.525399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.707 [2024-04-24 01:51:23.768998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val=0x1 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val=decompress 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val='111250 bytes' 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.966 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.966 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.966 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val=software 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@22 -- # accel_module=software 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val=32 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- 
accel/accel.sh@20 -- # val=32 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val=2 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val=Yes 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:23.967 01:51:24 -- accel/accel.sh@20 -- # val= 00:21:23.967 01:51:24 -- accel/accel.sh@21 -- # case "$var" in 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # IFS=: 00:21:23.967 01:51:24 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@20 -- # val= 00:21:26.497 01:51:26 -- accel/accel.sh@21 -- # case "$var" in 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # IFS=: 00:21:26.497 01:51:26 -- accel/accel.sh@19 -- # read -r var val 00:21:26.497 01:51:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:26.497 01:51:26 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:26.497 01:51:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.497 00:21:26.497 real 0m2.915s 00:21:26.497 user 0m2.666s 00:21:26.497 sys 0m0.183s 00:21:26.497 01:51:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:26.497 ************************************ 00:21:26.497 END TEST accel_deomp_full_mthread 00:21:26.497 01:51:26 -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.497 ************************************ 00:21:26.497 01:51:26 -- accel/accel.sh@124 -- # [[ n == y ]] 00:21:26.497 01:51:26 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:21:26.497 01:51:26 -- accel/accel.sh@137 -- # build_accel_config 00:21:26.497 01:51:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:26.497 01:51:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:26.497 01:51:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:26.497 01:51:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:26.497 01:51:26 -- common/autotest_common.sh@10 -- # set +x 00:21:26.497 01:51:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:26.497 01:51:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:26.497 01:51:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:26.497 01:51:26 -- accel/accel.sh@40 -- # local IFS=, 00:21:26.497 01:51:26 -- accel/accel.sh@41 -- # jq -r . 00:21:26.497 ************************************ 00:21:26.497 START TEST accel_dif_functional_tests 00:21:26.497 ************************************ 00:21:26.497 01:51:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:21:26.497 [2024-04-24 01:51:26.395423] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:26.497 [2024-04-24 01:51:26.395618] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116088 ] 00:21:26.756 [2024-04-24 01:51:26.588931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.014 [2024-04-24 01:51:26.857180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.014 [2024-04-24 01:51:26.857359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.014 [2024-04-24 01:51:26.857363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.273 00:21:27.274 00:21:27.274 CUnit - A unit testing framework for C - Version 2.1-3 00:21:27.274 http://cunit.sourceforge.net/ 00:21:27.274 00:21:27.274 00:21:27.274 Suite: accel_dif 00:21:27.274 Test: verify: DIF generated, GUARD check ...passed 00:21:27.274 Test: verify: DIF generated, APPTAG check ...passed 00:21:27.274 Test: verify: DIF generated, REFTAG check ...passed 00:21:27.274 Test: verify: DIF not generated, GUARD check ...[2024-04-24 01:51:27.225686] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:21:27.274 [2024-04-24 01:51:27.225802] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:21:27.274 passed 00:21:27.274 Test: verify: DIF not generated, APPTAG check ...passed 00:21:27.274 Test: verify: DIF not generated, REFTAG check ...passed 00:21:27.274 Test: verify: APPTAG correct, APPTAG check ...[2024-04-24 01:51:27.225879] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:21:27.274 [2024-04-24 01:51:27.225919] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:21:27.274 [2024-04-24 01:51:27.225958] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:21:27.274 [2024-04-24 01:51:27.226003] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, 
Expected=a, Actual=5a5a5a5a 00:21:27.274 passed 00:21:27.274 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:21:27.274 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:21:27.274 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:21:27.274 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:21:27.274 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 01:51:27.226125] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:21:27.274 [2024-04-24 01:51:27.226346] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:21:27.274 passed 00:21:27.274 Test: generate copy: DIF generated, GUARD check ...passed 00:21:27.274 Test: generate copy: DIF generated, APTTAG check ...passed 00:21:27.274 Test: generate copy: DIF generated, REFTAG check ...passed 00:21:27.274 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:21:27.274 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:21:27.274 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:21:27.274 Test: generate copy: iovecs-len validate ...passed 00:21:27.274 Test: generate copy: buffer alignment validate ...passed 00:21:27.274 00:21:27.274 Run Summary: Type Total Ran Passed Failed Inactive 00:21:27.274 suites 1 1 n/a 0 0 00:21:27.274 tests 20 20 20 0 0 00:21:27.274 asserts 204 204 204 0 n/a 00:21:27.274 00:21:27.274 Elapsed time = 0.001 seconds 00:21:27.274 [2024-04-24 01:51:27.226830] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:21:29.177 00:21:29.177 real 0m2.503s 00:21:29.177 user 0m4.899s 00:21:29.177 sys 0m0.256s 00:21:29.177 01:51:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:29.177 01:51:28 -- common/autotest_common.sh@10 -- # set +x 00:21:29.177 ************************************ 00:21:29.177 END TEST accel_dif_functional_tests 00:21:29.177 ************************************ 00:21:29.177 00:21:29.177 real 1m12.585s 00:21:29.177 user 1m19.278s 00:21:29.177 sys 0m6.757s 00:21:29.177 01:51:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:29.177 01:51:28 -- common/autotest_common.sh@10 -- # set +x 00:21:29.177 ************************************ 00:21:29.177 END TEST accel 00:21:29.177 ************************************ 00:21:29.177 01:51:28 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:21:29.177 01:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:29.177 01:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:29.177 01:51:28 -- common/autotest_common.sh@10 -- # set +x 00:21:29.177 ************************************ 00:21:29.177 START TEST accel_rpc 00:21:29.177 ************************************ 00:21:29.177 01:51:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:21:29.177 * Looking for test storage... 
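The accel suite that closes above ends with accel_deomp_full_mthread, which drives the accel_perf example binary with the flags traced at the top of this section. A minimal standalone reproduction is sketched below; it assumes the checkout at /home/vagrant/spdk_repo/spdk and drops the generated -c /dev/fd/62 accel config, since the harness only used that to select the (default) software module.

    cd /home/vagrant/spdk_repo/spdk
    # Flags copied from the harness invocation above; -T 2 is what makes this the
    # multi-threaded ("mthread") variant and -y verifies the decompressed output.
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -T 2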
00:21:29.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:21:29.177 01:51:29 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:29.177 01:51:29 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=116199 00:21:29.177 01:51:29 -- accel/accel_rpc.sh@15 -- # waitforlisten 116199 00:21:29.177 01:51:29 -- common/autotest_common.sh@817 -- # '[' -z 116199 ']' 00:21:29.177 01:51:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.177 01:51:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.177 01:51:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.177 01:51:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.177 01:51:29 -- common/autotest_common.sh@10 -- # set +x 00:21:29.177 01:51:29 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:29.177 [2024-04-24 01:51:29.135762] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:29.177 [2024-04-24 01:51:29.135900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116199 ] 00:21:29.436 [2024-04-24 01:51:29.297807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.695 [2024-04-24 01:51:29.552368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.262 01:51:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.262 01:51:30 -- common/autotest_common.sh@850 -- # return 0 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:21:30.262 01:51:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:30.262 01:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:30.262 01:51:30 -- common/autotest_common.sh@10 -- # set +x 00:21:30.262 ************************************ 00:21:30.262 START TEST accel_assign_opcode 00:21:30.262 ************************************ 00:21:30.262 01:51:30 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:21:30.262 01:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.262 01:51:30 -- common/autotest_common.sh@10 -- # set +x 00:21:30.262 [2024-04-24 01:51:30.157246] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:21:30.262 01:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:21:30.262 01:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.262 01:51:30 -- common/autotest_common.sh@10 -- # set +x 00:21:30.262 [2024-04-24 01:51:30.165162] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:21:30.262 01:51:30 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.262 01:51:30 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:21:30.262 01:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.262 01:51:30 -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 01:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.197 01:51:30 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:21:31.197 01:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.197 01:51:30 -- accel/accel_rpc.sh@42 -- # grep software 00:21:31.197 01:51:30 -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 01:51:30 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:21:31.197 01:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.197 software 00:21:31.197 00:21:31.197 real 0m0.845s 00:21:31.197 user 0m0.052s 00:21:31.197 sys 0m0.011s 00:21:31.197 01:51:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:31.197 01:51:30 -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 ************************************ 00:21:31.197 END TEST accel_assign_opcode 00:21:31.197 ************************************ 00:21:31.197 01:51:31 -- accel/accel_rpc.sh@55 -- # killprocess 116199 00:21:31.197 01:51:31 -- common/autotest_common.sh@936 -- # '[' -z 116199 ']' 00:21:31.197 01:51:31 -- common/autotest_common.sh@940 -- # kill -0 116199 00:21:31.197 01:51:31 -- common/autotest_common.sh@941 -- # uname 00:21:31.197 01:51:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:31.197 01:51:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116199 00:21:31.197 01:51:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:31.197 01:51:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:31.197 killing process with pid 116199 00:21:31.197 01:51:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116199' 00:21:31.197 01:51:31 -- common/autotest_common.sh@955 -- # kill 116199 00:21:31.197 01:51:31 -- common/autotest_common.sh@960 -- # wait 116199 00:21:33.729 ************************************ 00:21:33.729 END TEST accel_rpc 00:21:33.729 ************************************ 00:21:33.729 00:21:33.729 real 0m4.617s 00:21:33.729 user 0m4.714s 00:21:33.729 sys 0m0.508s 00:21:33.729 01:51:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:33.729 01:51:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.729 01:51:33 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:21:33.729 01:51:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:33.729 01:51:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.729 01:51:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.729 ************************************ 00:21:33.729 START TEST app_cmdline 00:21:33.729 ************************************ 00:21:33.729 01:51:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:21:33.729 * Looking for test storage... 
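The accel_assign_opcode case that just finished reduces to a short RPC sequence against a target started with --wait-for-rpc: opcode assignments are only accepted before framework initialization, and the suite then reads the assignment table back to confirm. A sketch of the same flow by hand, using the paths seen in this workspace:

    ./build/bin/spdk_tgt --wait-for-rpc &
    # (wait for the RPC socket to come up, e.g. with waitforlisten, before issuing calls)
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                     # finish subsystem init, locking in assignments
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", as asserted above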
00:21:33.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:21:33.729 01:51:33 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:21:33.729 01:51:33 -- app/cmdline.sh@17 -- # spdk_tgt_pid=116342 00:21:33.729 01:51:33 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:21:33.729 01:51:33 -- app/cmdline.sh@18 -- # waitforlisten 116342 00:21:33.729 01:51:33 -- common/autotest_common.sh@817 -- # '[' -z 116342 ']' 00:21:33.729 01:51:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.729 01:51:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:33.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.729 01:51:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.729 01:51:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:33.729 01:51:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.988 [2024-04-24 01:51:33.869007] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:33.988 [2024-04-24 01:51:33.869201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116342 ] 00:21:33.988 [2024-04-24 01:51:34.048621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.555 [2024-04-24 01:51:34.334099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.174 01:51:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.174 01:51:35 -- common/autotest_common.sh@850 -- # return 0 00:21:35.174 01:51:35 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:21:35.431 { 00:21:35.431 "version": "SPDK v24.05-pre git sha1 3f3de12cc", 00:21:35.431 "fields": { 00:21:35.431 "major": 24, 00:21:35.431 "minor": 5, 00:21:35.431 "patch": 0, 00:21:35.431 "suffix": "-pre", 00:21:35.431 "commit": "3f3de12cc" 00:21:35.431 } 00:21:35.431 } 00:21:35.431 01:51:35 -- app/cmdline.sh@22 -- # expected_methods=() 00:21:35.431 01:51:35 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:21:35.431 01:51:35 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:21:35.431 01:51:35 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:21:35.432 01:51:35 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:21:35.432 01:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.432 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:21:35.432 01:51:35 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:21:35.432 01:51:35 -- app/cmdline.sh@26 -- # sort 00:21:35.432 01:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.691 01:51:35 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:21:35.691 01:51:35 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:21:35.691 01:51:35 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:35.691 01:51:35 -- common/autotest_common.sh@638 -- # local es=0 00:21:35.691 01:51:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:35.691 01:51:35 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.691 01:51:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:35.691 01:51:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.691 01:51:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:35.691 01:51:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.691 01:51:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:35.691 01:51:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.691 01:51:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:35.691 01:51:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:35.691 request: 00:21:35.691 { 00:21:35.691 "method": "env_dpdk_get_mem_stats", 00:21:35.691 "req_id": 1 00:21:35.691 } 00:21:35.691 Got JSON-RPC error response 00:21:35.691 response: 00:21:35.691 { 00:21:35.691 "code": -32601, 00:21:35.691 "message": "Method not found" 00:21:35.691 } 00:21:35.949 01:51:35 -- common/autotest_common.sh@641 -- # es=1 00:21:35.949 01:51:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:35.949 01:51:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:35.949 01:51:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:35.949 01:51:35 -- app/cmdline.sh@1 -- # killprocess 116342 00:21:35.949 01:51:35 -- common/autotest_common.sh@936 -- # '[' -z 116342 ']' 00:21:35.949 01:51:35 -- common/autotest_common.sh@940 -- # kill -0 116342 00:21:35.949 01:51:35 -- common/autotest_common.sh@941 -- # uname 00:21:35.949 01:51:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.949 01:51:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116342 00:21:35.949 01:51:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:35.949 01:51:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:35.949 01:51:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116342' 00:21:35.949 killing process with pid 116342 00:21:35.949 01:51:35 -- common/autotest_common.sh@955 -- # kill 116342 00:21:35.949 01:51:35 -- common/autotest_common.sh@960 -- # wait 116342 00:21:38.482 ************************************ 00:21:38.482 END TEST app_cmdline 00:21:38.482 ************************************ 00:21:38.482 00:21:38.482 real 0m4.623s 00:21:38.482 user 0m5.019s 00:21:38.482 sys 0m0.604s 00:21:38.482 01:51:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:38.482 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.482 01:51:38 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:21:38.482 01:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:38.482 01:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.482 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.482 ************************************ 00:21:38.482 START TEST version 00:21:38.482 ************************************ 00:21:38.482 01:51:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:21:38.482 * Looking for test storage... 
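The app_cmdline run above exercises the --rpcs-allowed allow-list: only spdk_get_version and rpc_get_methods are served, and any other method is answered with JSON-RPC error -32601. A hand-run sketch of the same checks, with the same flags and script paths as in the trace:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version                         # allowed: returns the version object shown above
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort     # allowed: exactly the two whitelisted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                   # rejected: "Method not found" (code -32601)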
00:21:38.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:21:38.482 01:51:38 -- app/version.sh@17 -- # get_header_version major 00:21:38.482 01:51:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:38.482 01:51:38 -- app/version.sh@14 -- # cut -f2 00:21:38.482 01:51:38 -- app/version.sh@14 -- # tr -d '"' 00:21:38.482 01:51:38 -- app/version.sh@17 -- # major=24 00:21:38.482 01:51:38 -- app/version.sh@18 -- # get_header_version minor 00:21:38.482 01:51:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:38.482 01:51:38 -- app/version.sh@14 -- # cut -f2 00:21:38.482 01:51:38 -- app/version.sh@14 -- # tr -d '"' 00:21:38.482 01:51:38 -- app/version.sh@18 -- # minor=5 00:21:38.482 01:51:38 -- app/version.sh@19 -- # get_header_version patch 00:21:38.482 01:51:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:38.482 01:51:38 -- app/version.sh@14 -- # cut -f2 00:21:38.482 01:51:38 -- app/version.sh@14 -- # tr -d '"' 00:21:38.482 01:51:38 -- app/version.sh@19 -- # patch=0 00:21:38.482 01:51:38 -- app/version.sh@20 -- # get_header_version suffix 00:21:38.482 01:51:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:38.482 01:51:38 -- app/version.sh@14 -- # cut -f2 00:21:38.482 01:51:38 -- app/version.sh@14 -- # tr -d '"' 00:21:38.482 01:51:38 -- app/version.sh@20 -- # suffix=-pre 00:21:38.482 01:51:38 -- app/version.sh@22 -- # version=24.5 00:21:38.482 01:51:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:21:38.482 01:51:38 -- app/version.sh@28 -- # version=24.5rc0 00:21:38.482 01:51:38 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:21:38.482 01:51:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:21:38.741 01:51:38 -- app/version.sh@30 -- # py_version=24.5rc0 00:21:38.741 01:51:38 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:21:38.741 ************************************ 00:21:38.741 END TEST version 00:21:38.741 ************************************ 00:21:38.741 00:21:38.741 real 0m0.183s 00:21:38.741 user 0m0.126s 00:21:38.741 sys 0m0.099s 00:21:38.741 01:51:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:38.741 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.741 01:51:38 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:21:38.741 01:51:38 -- spdk/autotest.sh@185 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:21:38.741 01:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:38.741 01:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.741 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.741 ************************************ 00:21:38.741 START TEST blockdev_general 00:21:38.741 ************************************ 00:21:38.741 01:51:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:21:38.741 * Looking for test storage... 
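The version test traced above derives the version purely from include/spdk/version.h with grep/cut/tr and then compares it with what the Python package reports. A compressed sketch of the same pipeline (argument case simplified relative to the script's helper):

    get_header_version() {  # e.g. get_header_version MAJOR -> 24
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 5
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version="${major}.${minor}"; (( patch != 0 )) && version+=".${patch}"
    # a "-pre" suffix corresponds to the rc0 reported by the Python side:
    python3 -c 'import spdk; print(spdk.__version__)'          # 24.5rc0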
00:21:38.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:38.741 01:51:38 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:38.741 01:51:38 -- bdev/nbd_common.sh@6 -- # set -e 00:21:38.741 01:51:38 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:38.741 01:51:38 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:38.741 01:51:38 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:38.741 01:51:38 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:38.741 01:51:38 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:38.741 01:51:38 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:38.741 01:51:38 -- bdev/blockdev.sh@20 -- # : 00:21:38.741 01:51:38 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:21:38.741 01:51:38 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:21:38.741 01:51:38 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:21:38.741 01:51:38 -- bdev/blockdev.sh@674 -- # uname -s 00:21:38.741 01:51:38 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:21:38.741 01:51:38 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:21:38.741 01:51:38 -- bdev/blockdev.sh@682 -- # test_type=bdev 00:21:38.741 01:51:38 -- bdev/blockdev.sh@683 -- # crypto_device= 00:21:38.741 01:51:38 -- bdev/blockdev.sh@684 -- # dek= 00:21:38.741 01:51:38 -- bdev/blockdev.sh@685 -- # env_ctx= 00:21:38.741 01:51:38 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:21:38.742 01:51:38 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:21:38.742 01:51:38 -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:21:38.742 01:51:38 -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:21:38.742 01:51:38 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:21:38.742 01:51:38 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=116540 00:21:38.742 01:51:38 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:38.742 01:51:38 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:21:38.742 01:51:38 -- bdev/blockdev.sh@49 -- # waitforlisten 116540 00:21:38.742 01:51:38 -- common/autotest_common.sh@817 -- # '[' -z 116540 ']' 00:21:38.742 01:51:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.742 01:51:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:38.742 01:51:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.742 01:51:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:38.742 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:39.000 [2024-04-24 01:51:38.913349] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:39.000 [2024-04-24 01:51:38.913543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116540 ] 00:21:39.258 [2024-04-24 01:51:39.090587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.258 [2024-04-24 01:51:39.291710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.822 01:51:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:39.823 01:51:39 -- common/autotest_common.sh@850 -- # return 0 00:21:39.823 01:51:39 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:21:39.823 01:51:39 -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:21:39.823 01:51:39 -- bdev/blockdev.sh@53 -- # rpc_cmd 00:21:39.823 01:51:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.823 01:51:39 -- common/autotest_common.sh@10 -- # set +x 00:21:40.802 [2024-04-24 01:51:40.694451] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:40.802 [2024-04-24 01:51:40.695185] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:40.802 00:21:40.802 [2024-04-24 01:51:40.702441] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:40.802 [2024-04-24 01:51:40.702619] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:40.802 00:21:40.802 Malloc0 00:21:40.802 Malloc1 00:21:40.802 Malloc2 00:21:40.802 Malloc3 00:21:41.060 Malloc4 00:21:41.060 Malloc5 00:21:41.060 Malloc6 00:21:41.060 Malloc7 00:21:41.060 Malloc8 00:21:41.060 Malloc9 00:21:41.060 [2024-04-24 01:51:41.137056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:41.060 [2024-04-24 01:51:41.137230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.060 [2024-04-24 01:51:41.137303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:41.060 [2024-04-24 01:51:41.137397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.060 [2024-04-24 01:51:41.139869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.060 [2024-04-24 01:51:41.140061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:21:41.060 TestPT 00:21:41.319 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.319 01:51:41 -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:21:41.319 5000+0 records in 00:21:41.319 5000+0 records out 00:21:41.319 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0340781 s, 300 MB/s 00:21:41.319 01:51:41 -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:21:41.319 01:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.319 01:51:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.319 AIO0 00:21:41.319 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.319 01:51:41 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:21:41.319 01:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.319 01:51:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.319 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.319 01:51:41 -- bdev/blockdev.sh@740 -- # cat 00:21:41.319 01:51:41 
-- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:21:41.319 01:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.319 01:51:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.319 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.319 01:51:41 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:21:41.319 01:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.319 01:51:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.319 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.319 01:51:41 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:41.319 01:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.319 01:51:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.319 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.319 01:51:41 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:21:41.319 01:51:41 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:21:41.319 01:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.319 01:51:41 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:21:41.319 01:51:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.579 01:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.579 01:51:41 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:21:41.579 01:51:41 -- bdev/blockdev.sh@749 -- # jq -r .name 00:21:41.580 01:51:41 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d2782d55-130f-433a-af55-11308d0992f5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d2782d55-130f-433a-af55-11308d0992f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "932c4d42-a62c-52ce-bc87-b9490a25fa1d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "932c4d42-a62c-52ce-bc87-b9490a25fa1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "bd632f3e-5739-5d3c-b09e-e022279ed98f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bd632f3e-5739-5d3c-b09e-e022279ed98f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d15989a3-027f-5373-86c3-244b40b0fc25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d15989a3-027f-5373-86c3-244b40b0fc25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "34ba9237-5a14-5851-abfc-bb718efff827"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "34ba9237-5a14-5851-abfc-bb718efff827",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f1547b23-7469-5790-b43f-c1e1998cf72c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1547b23-7469-5790-b43f-c1e1998cf72c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "829a07bd-db64-54cf-bf79-e1992afd56b9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "829a07bd-db64-54cf-bf79-e1992afd56b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "afddec8a-fc37-59c6-87b8-a23e1659fe1c"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "afddec8a-fc37-59c6-87b8-a23e1659fe1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "0f82aebd-f805-5c7d-8ecb-d415872a1316"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f82aebd-f805-5c7d-8ecb-d415872a1316",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "15baaa49-2630-5956-9d32-21f3596b5143"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "15baaa49-2630-5956-9d32-21f3596b5143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e573ec0b-c43a-5f48-ada5-e4f99b33dfb8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e573ec0b-c43a-5f48-ada5-e4f99b33dfb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b6c1693a-813d-5742-8cca-a222db95ce49"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b6c1693a-813d-5742-8cca-a222db95ce49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d04716a6-e6b3-4e66-b055-1af08ac08208"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d04716a6-e6b3-4e66-b055-1af08ac08208",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d04716a6-e6b3-4e66-b055-1af08ac08208",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "1468de49-239a-4061-af7b-d581de7cf39b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f33ddecb-2f3a-4953-9591-d953fe0cc207",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1367b197-07fa-473f-b0e9-d0bf67a48bc3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1367b197-07fa-473f-b0e9-d0bf67a48bc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1367b197-07fa-473f-b0e9-d0bf67a48bc3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "8dd46316-4925-4aa2-a7eb-f4aa679e74b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c8ec7219-73ea-434e-bb5b-9fe929613a11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "530d2e86-563a-4e91-9e83-33c54850a7f8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "530d2e86-563a-4e91-9e83-33c54850a7f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "530d2e86-563a-4e91-9e83-33c54850a7f8",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "eda18a5a-b851-4d1d-ab01-8e709838f724",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7bce8a4e-372f-492d-b22d-af3c6e382dd3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4d6b79cc-cd01-4b1c-b85c-0a71da2bae10"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4d6b79cc-cd01-4b1c-b85c-0a71da2bae10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:21:41.580 01:51:41 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:21:41.580 01:51:41 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:21:41.580 01:51:41 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:21:41.580 01:51:41 -- bdev/blockdev.sh@754 -- # killprocess 116540 00:21:41.580 01:51:41 -- common/autotest_common.sh@936 -- # '[' -z 116540 ']' 00:21:41.580 01:51:41 -- common/autotest_common.sh@940 -- # kill -0 116540 00:21:41.580 01:51:41 -- common/autotest_common.sh@941 -- # uname 00:21:41.580 01:51:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.580 01:51:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116540 00:21:41.580 01:51:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:41.580 01:51:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:41.580 01:51:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116540' 00:21:41.580 killing process with pid 116540 00:21:41.580 01:51:41 -- common/autotest_common.sh@955 -- # kill 116540 00:21:41.580 01:51:41 -- 
common/autotest_common.sh@960 -- # wait 116540 00:21:45.767 01:51:45 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:45.767 01:51:45 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:21:45.767 01:51:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:21:45.767 01:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:45.767 01:51:45 -- common/autotest_common.sh@10 -- # set +x 00:21:45.767 ************************************ 00:21:45.767 START TEST bdev_hello_world 00:21:45.767 ************************************ 00:21:45.767 01:51:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:21:45.767 [2024-04-24 01:51:45.599609] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:45.767 [2024-04-24 01:51:45.599805] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116655 ] 00:21:45.767 [2024-04-24 01:51:45.781655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.026 [2024-04-24 01:51:46.079443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.593 [2024-04-24 01:51:46.511103] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:46.593 [2024-04-24 01:51:46.511407] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:46.593 [2024-04-24 01:51:46.519041] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:46.593 [2024-04-24 01:51:46.519211] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:46.593 [2024-04-24 01:51:46.527062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:46.593 [2024-04-24 01:51:46.527214] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:21:46.593 [2024-04-24 01:51:46.527353] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:21:46.927 [2024-04-24 01:51:46.739320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:46.927 [2024-04-24 01:51:46.739634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.927 [2024-04-24 01:51:46.739705] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:46.927 [2024-04-24 01:51:46.739808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.927 [2024-04-24 01:51:46.742362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.927 [2024-04-24 01:51:46.742542] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:21:47.194 [2024-04-24 01:51:47.088891] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:47.194 [2024-04-24 01:51:47.089162] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:21:47.194 [2024-04-24 01:51:47.089289] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:47.194 [2024-04-24 01:51:47.089476] hello_bdev.c: 138:hello_write: *NOTICE*: Writing 
to the bdev 00:21:47.194 [2024-04-24 01:51:47.089610] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:47.194 [2024-04-24 01:51:47.089792] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:47.194 [2024-04-24 01:51:47.089924] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:21:47.194 00:21:47.194 [2024-04-24 01:51:47.090042] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:49.725 00:21:49.725 real 0m3.854s 00:21:49.725 user 0m3.293s 00:21:49.725 sys 0m0.400s 00:21:49.725 01:51:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:49.725 01:51:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.725 ************************************ 00:21:49.725 END TEST bdev_hello_world 00:21:49.725 ************************************ 00:21:49.725 01:51:49 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:21:49.725 01:51:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:49.725 01:51:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:49.725 01:51:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.725 ************************************ 00:21:49.725 START TEST bdev_bounds 00:21:49.725 ************************************ 00:21:49.725 01:51:49 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:21:49.725 01:51:49 -- bdev/blockdev.sh@290 -- # bdevio_pid=116733 00:21:49.725 01:51:49 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:49.725 01:51:49 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:49.725 01:51:49 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 116733' 00:21:49.725 Process bdevio pid: 116733 00:21:49.725 01:51:49 -- bdev/blockdev.sh@293 -- # waitforlisten 116733 00:21:49.725 01:51:49 -- common/autotest_common.sh@817 -- # '[' -z 116733 ']' 00:21:49.725 01:51:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.725 01:51:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.725 01:51:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.725 01:51:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.725 01:51:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.725 [2024-04-24 01:51:49.545523] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:21:49.725 [2024-04-24 01:51:49.545707] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116733 ] 00:21:49.725 [2024-04-24 01:51:49.748356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:49.984 [2024-04-24 01:51:50.022834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.984 [2024-04-24 01:51:50.022922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.984 [2024-04-24 01:51:50.022923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.551 [2024-04-24 01:51:50.518311] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:50.551 [2024-04-24 01:51:50.518623] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:50.551 [2024-04-24 01:51:50.526249] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:50.551 [2024-04-24 01:51:50.526475] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:50.551 [2024-04-24 01:51:50.534281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:50.551 [2024-04-24 01:51:50.534500] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:21:50.551 [2024-04-24 01:51:50.534618] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:21:50.810 [2024-04-24 01:51:50.785779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:50.810 [2024-04-24 01:51:50.786105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.810 [2024-04-24 01:51:50.786254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:50.810 [2024-04-24 01:51:50.786367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.810 [2024-04-24 01:51:50.789247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.810 [2024-04-24 01:51:50.789421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:21:51.377 01:51:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:51.377 01:51:51 -- common/autotest_common.sh@850 -- # return 0 00:21:51.377 01:51:51 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:51.377 I/O targets: 00:21:51.377 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:21:51.377 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:21:51.377 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:21:51.377 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:21:51.377 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:21:51.377 raid0: 131072 blocks of 512 bytes (64 MiB) 00:21:51.377 concat0: 131072 blocks of 512 bytes (64 MiB) 00:21:51.377 raid1: 65536 blocks of 512 bytes (32 MiB) 00:21:51.377 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:21:51.377 00:21:51.377 00:21:51.377 CUnit - A unit testing framework for C - Version 2.1-3 00:21:51.377 http://cunit.sourceforge.net/ 00:21:51.377 00:21:51.378 00:21:51.378 Suite: bdevio tests on: AIO0 00:21:51.378 Test: blockdev write read block ...passed 00:21:51.378 Test: blockdev write zeroes read block ...passed 00:21:51.378 Test: blockdev write zeroes read no split ...passed 00:21:51.378 Test: blockdev write zeroes read split ...passed 00:21:51.378 Test: blockdev write zeroes read split partial ...passed 00:21:51.378 Test: blockdev reset ...passed 00:21:51.378 Test: blockdev write read 8 blocks ...passed 00:21:51.378 Test: blockdev write read size > 128k ...passed 00:21:51.378 Test: blockdev write read invalid size ...passed 00:21:51.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.636 Test: blockdev write read max offset ...passed 00:21:51.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.636 Test: blockdev writev readv 8 blocks ...passed 00:21:51.636 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.636 Test: blockdev writev readv block ...passed 00:21:51.636 Test: blockdev writev readv size > 128k ...passed 00:21:51.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.636 Test: blockdev comparev and writev ...passed 00:21:51.636 Test: blockdev nvme passthru rw ...passed 00:21:51.636 Test: blockdev nvme passthru vendor specific ...passed 00:21:51.636 Test: blockdev nvme admin passthru ...passed 00:21:51.636 Test: blockdev copy ...passed 00:21:51.636 Suite: bdevio tests on: raid1 00:21:51.636 Test: blockdev write read block ...passed 00:21:51.636 Test: blockdev write zeroes read block ...passed 00:21:51.636 Test: blockdev write zeroes read no split ...passed 00:21:51.636 Test: blockdev write zeroes read split ...passed 00:21:51.636 Test: blockdev write zeroes read split partial ...passed 00:21:51.636 Test: blockdev reset ...passed 00:21:51.636 Test: blockdev write read 8 blocks ...passed 00:21:51.636 Test: blockdev write read size > 128k ...passed 00:21:51.636 Test: blockdev write read invalid size ...passed 00:21:51.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.636 Test: blockdev write read max offset ...passed 00:21:51.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.636 Test: blockdev writev readv 8 blocks ...passed 00:21:51.636 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.636 Test: blockdev writev readv block ...passed 00:21:51.636 Test: blockdev writev readv size > 128k ...passed 00:21:51.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.636 Test: blockdev comparev and writev ...passed 00:21:51.636 Test: blockdev nvme passthru rw ...passed 00:21:51.636 Test: blockdev nvme passthru vendor specific ...passed 00:21:51.636 Test: blockdev nvme admin passthru ...passed 00:21:51.636 Test: blockdev copy ...passed 00:21:51.636 Suite: bdevio tests on: concat0 00:21:51.636 Test: blockdev write read block ...passed 00:21:51.636 Test: blockdev write zeroes read block ...passed 00:21:51.636 Test: blockdev write zeroes read no split ...passed 00:21:51.636 Test: blockdev write zeroes read split ...passed 00:21:51.636 Test: blockdev write zeroes read split partial ...passed 00:21:51.636 Test: blockdev reset 
...passed 00:21:51.636 Test: blockdev write read 8 blocks ...passed 00:21:51.636 Test: blockdev write read size > 128k ...passed 00:21:51.636 Test: blockdev write read invalid size ...passed 00:21:51.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.636 Test: blockdev write read max offset ...passed 00:21:51.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.636 Test: blockdev writev readv 8 blocks ...passed 00:21:51.636 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.636 Test: blockdev writev readv block ...passed 00:21:51.636 Test: blockdev writev readv size > 128k ...passed 00:21:51.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.636 Test: blockdev comparev and writev ...passed 00:21:51.636 Test: blockdev nvme passthru rw ...passed 00:21:51.636 Test: blockdev nvme passthru vendor specific ...passed 00:21:51.636 Test: blockdev nvme admin passthru ...passed 00:21:51.636 Test: blockdev copy ...passed 00:21:51.636 Suite: bdevio tests on: raid0 00:21:51.636 Test: blockdev write read block ...passed 00:21:51.636 Test: blockdev write zeroes read block ...passed 00:21:51.636 Test: blockdev write zeroes read no split ...passed 00:21:51.895 Test: blockdev write zeroes read split ...passed 00:21:51.895 Test: blockdev write zeroes read split partial ...passed 00:21:51.895 Test: blockdev reset ...passed 00:21:51.895 Test: blockdev write read 8 blocks ...passed 00:21:51.895 Test: blockdev write read size > 128k ...passed 00:21:51.895 Test: blockdev write read invalid size ...passed 00:21:51.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.895 Test: blockdev write read max offset ...passed 00:21:51.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.895 Test: blockdev writev readv 8 blocks ...passed 00:21:51.895 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.895 Test: blockdev writev readv block ...passed 00:21:51.895 Test: blockdev writev readv size > 128k ...passed 00:21:51.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.895 Test: blockdev comparev and writev ...passed 00:21:51.895 Test: blockdev nvme passthru rw ...passed 00:21:51.895 Test: blockdev nvme passthru vendor specific ...passed 00:21:51.895 Test: blockdev nvme admin passthru ...passed 00:21:51.895 Test: blockdev copy ...passed 00:21:51.895 Suite: bdevio tests on: TestPT 00:21:51.895 Test: blockdev write read block ...passed 00:21:51.895 Test: blockdev write zeroes read block ...passed 00:21:51.895 Test: blockdev write zeroes read no split ...passed 00:21:51.895 Test: blockdev write zeroes read split ...passed 00:21:51.895 Test: blockdev write zeroes read split partial ...passed 00:21:51.895 Test: blockdev reset ...passed 00:21:51.895 Test: blockdev write read 8 blocks ...passed 00:21:51.895 Test: blockdev write read size > 128k ...passed 00:21:51.895 Test: blockdev write read invalid size ...passed 00:21:51.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.895 Test: blockdev write read max offset ...passed 00:21:51.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.895 Test: blockdev writev readv 8 blocks 
...passed 00:21:51.895 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.895 Test: blockdev writev readv block ...passed 00:21:51.895 Test: blockdev writev readv size > 128k ...passed 00:21:51.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.895 Test: blockdev comparev and writev ...passed 00:21:51.895 Test: blockdev nvme passthru rw ...passed 00:21:51.895 Test: blockdev nvme passthru vendor specific ...passed 00:21:51.895 Test: blockdev nvme admin passthru ...passed 00:21:51.895 Test: blockdev copy ...passed 00:21:51.895 Suite: bdevio tests on: Malloc2p7 00:21:51.895 Test: blockdev write read block ...passed 00:21:51.895 Test: blockdev write zeroes read block ...passed 00:21:51.895 Test: blockdev write zeroes read no split ...passed 00:21:51.895 Test: blockdev write zeroes read split ...passed 00:21:52.154 Test: blockdev write zeroes read split partial ...passed 00:21:52.154 Test: blockdev reset ...passed 00:21:52.154 Test: blockdev write read 8 blocks ...passed 00:21:52.154 Test: blockdev write read size > 128k ...passed 00:21:52.154 Test: blockdev write read invalid size ...passed 00:21:52.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.154 Test: blockdev write read max offset ...passed 00:21:52.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.154 Test: blockdev writev readv 8 blocks ...passed 00:21:52.154 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.154 Test: blockdev writev readv block ...passed 00:21:52.154 Test: blockdev writev readv size > 128k ...passed 00:21:52.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.154 Test: blockdev comparev and writev ...passed 00:21:52.154 Test: blockdev nvme passthru rw ...passed 00:21:52.154 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.154 Test: blockdev nvme admin passthru ...passed 00:21:52.154 Test: blockdev copy ...passed 00:21:52.154 Suite: bdevio tests on: Malloc2p6 00:21:52.154 Test: blockdev write read block ...passed 00:21:52.154 Test: blockdev write zeroes read block ...passed 00:21:52.154 Test: blockdev write zeroes read no split ...passed 00:21:52.154 Test: blockdev write zeroes read split ...passed 00:21:52.154 Test: blockdev write zeroes read split partial ...passed 00:21:52.154 Test: blockdev reset ...passed 00:21:52.154 Test: blockdev write read 8 blocks ...passed 00:21:52.154 Test: blockdev write read size > 128k ...passed 00:21:52.154 Test: blockdev write read invalid size ...passed 00:21:52.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.154 Test: blockdev write read max offset ...passed 00:21:52.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.154 Test: blockdev writev readv 8 blocks ...passed 00:21:52.154 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.154 Test: blockdev writev readv block ...passed 00:21:52.154 Test: blockdev writev readv size > 128k ...passed 00:21:52.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.154 Test: blockdev comparev and writev ...passed 00:21:52.154 Test: blockdev nvme passthru rw ...passed 00:21:52.154 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.154 Test: blockdev nvme admin passthru ...passed 00:21:52.154 Test: blockdev copy ...passed 
00:21:52.154 Suite: bdevio tests on: Malloc2p5 00:21:52.154 Test: blockdev write read block ...passed 00:21:52.154 Test: blockdev write zeroes read block ...passed 00:21:52.154 Test: blockdev write zeroes read no split ...passed 00:21:52.154 Test: blockdev write zeroes read split ...passed 00:21:52.154 Test: blockdev write zeroes read split partial ...passed 00:21:52.154 Test: blockdev reset ...passed 00:21:52.154 Test: blockdev write read 8 blocks ...passed 00:21:52.154 Test: blockdev write read size > 128k ...passed 00:21:52.154 Test: blockdev write read invalid size ...passed 00:21:52.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.154 Test: blockdev write read max offset ...passed 00:21:52.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.154 Test: blockdev writev readv 8 blocks ...passed 00:21:52.154 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.154 Test: blockdev writev readv block ...passed 00:21:52.154 Test: blockdev writev readv size > 128k ...passed 00:21:52.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.154 Test: blockdev comparev and writev ...passed 00:21:52.154 Test: blockdev nvme passthru rw ...passed 00:21:52.154 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.154 Test: blockdev nvme admin passthru ...passed 00:21:52.154 Test: blockdev copy ...passed 00:21:52.154 Suite: bdevio tests on: Malloc2p4 00:21:52.154 Test: blockdev write read block ...passed 00:21:52.154 Test: blockdev write zeroes read block ...passed 00:21:52.154 Test: blockdev write zeroes read no split ...passed 00:21:52.154 Test: blockdev write zeroes read split ...passed 00:21:52.154 Test: blockdev write zeroes read split partial ...passed 00:21:52.154 Test: blockdev reset ...passed 00:21:52.154 Test: blockdev write read 8 blocks ...passed 00:21:52.154 Test: blockdev write read size > 128k ...passed 00:21:52.154 Test: blockdev write read invalid size ...passed 00:21:52.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.154 Test: blockdev write read max offset ...passed 00:21:52.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.154 Test: blockdev writev readv 8 blocks ...passed 00:21:52.154 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.154 Test: blockdev writev readv block ...passed 00:21:52.155 Test: blockdev writev readv size > 128k ...passed 00:21:52.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.155 Test: blockdev comparev and writev ...passed 00:21:52.155 Test: blockdev nvme passthru rw ...passed 00:21:52.155 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.155 Test: blockdev nvme admin passthru ...passed 00:21:52.155 Test: blockdev copy ...passed 00:21:52.155 Suite: bdevio tests on: Malloc2p3 00:21:52.155 Test: blockdev write read block ...passed 00:21:52.155 Test: blockdev write zeroes read block ...passed 00:21:52.155 Test: blockdev write zeroes read no split ...passed 00:21:52.414 Test: blockdev write zeroes read split ...passed 00:21:52.414 Test: blockdev write zeroes read split partial ...passed 00:21:52.414 Test: blockdev reset ...passed 00:21:52.414 Test: blockdev write read 8 blocks ...passed 00:21:52.414 Test: blockdev write read size > 128k ...passed 00:21:52.414 Test: 
blockdev write read invalid size ...passed 00:21:52.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.414 Test: blockdev write read max offset ...passed 00:21:52.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.414 Test: blockdev writev readv 8 blocks ...passed 00:21:52.414 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.414 Test: blockdev writev readv block ...passed 00:21:52.414 Test: blockdev writev readv size > 128k ...passed 00:21:52.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.414 Test: blockdev comparev and writev ...passed 00:21:52.414 Test: blockdev nvme passthru rw ...passed 00:21:52.414 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.414 Test: blockdev nvme admin passthru ...passed 00:21:52.414 Test: blockdev copy ...passed 00:21:52.414 Suite: bdevio tests on: Malloc2p2 00:21:52.414 Test: blockdev write read block ...passed 00:21:52.414 Test: blockdev write zeroes read block ...passed 00:21:52.414 Test: blockdev write zeroes read no split ...passed 00:21:52.414 Test: blockdev write zeroes read split ...passed 00:21:52.414 Test: blockdev write zeroes read split partial ...passed 00:21:52.414 Test: blockdev reset ...passed 00:21:52.414 Test: blockdev write read 8 blocks ...passed 00:21:52.414 Test: blockdev write read size > 128k ...passed 00:21:52.414 Test: blockdev write read invalid size ...passed 00:21:52.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.414 Test: blockdev write read max offset ...passed 00:21:52.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.414 Test: blockdev writev readv 8 blocks ...passed 00:21:52.414 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.414 Test: blockdev writev readv block ...passed 00:21:52.414 Test: blockdev writev readv size > 128k ...passed 00:21:52.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.414 Test: blockdev comparev and writev ...passed 00:21:52.414 Test: blockdev nvme passthru rw ...passed 00:21:52.414 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.414 Test: blockdev nvme admin passthru ...passed 00:21:52.414 Test: blockdev copy ...passed 00:21:52.414 Suite: bdevio tests on: Malloc2p1 00:21:52.414 Test: blockdev write read block ...passed 00:21:52.414 Test: blockdev write zeroes read block ...passed 00:21:52.414 Test: blockdev write zeroes read no split ...passed 00:21:52.414 Test: blockdev write zeroes read split ...passed 00:21:52.414 Test: blockdev write zeroes read split partial ...passed 00:21:52.414 Test: blockdev reset ...passed 00:21:52.414 Test: blockdev write read 8 blocks ...passed 00:21:52.414 Test: blockdev write read size > 128k ...passed 00:21:52.414 Test: blockdev write read invalid size ...passed 00:21:52.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.414 Test: blockdev write read max offset ...passed 00:21:52.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.414 Test: blockdev writev readv 8 blocks ...passed 00:21:52.414 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.414 Test: blockdev writev readv block ...passed 
00:21:52.414 Test: blockdev writev readv size > 128k ...passed 00:21:52.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.414 Test: blockdev comparev and writev ...passed 00:21:52.414 Test: blockdev nvme passthru rw ...passed 00:21:52.414 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.414 Test: blockdev nvme admin passthru ...passed 00:21:52.414 Test: blockdev copy ...passed 00:21:52.414 Suite: bdevio tests on: Malloc2p0 00:21:52.414 Test: blockdev write read block ...passed 00:21:52.414 Test: blockdev write zeroes read block ...passed 00:21:52.414 Test: blockdev write zeroes read no split ...passed 00:21:52.673 Test: blockdev write zeroes read split ...passed 00:21:52.673 Test: blockdev write zeroes read split partial ...passed 00:21:52.673 Test: blockdev reset ...passed 00:21:52.673 Test: blockdev write read 8 blocks ...passed 00:21:52.673 Test: blockdev write read size > 128k ...passed 00:21:52.673 Test: blockdev write read invalid size ...passed 00:21:52.673 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.673 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.673 Test: blockdev write read max offset ...passed 00:21:52.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.674 Test: blockdev writev readv 8 blocks ...passed 00:21:52.674 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.674 Test: blockdev writev readv block ...passed 00:21:52.674 Test: blockdev writev readv size > 128k ...passed 00:21:52.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.674 Test: blockdev comparev and writev ...passed 00:21:52.674 Test: blockdev nvme passthru rw ...passed 00:21:52.674 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.674 Test: blockdev nvme admin passthru ...passed 00:21:52.674 Test: blockdev copy ...passed 00:21:52.674 Suite: bdevio tests on: Malloc1p1 00:21:52.674 Test: blockdev write read block ...passed 00:21:52.674 Test: blockdev write zeroes read block ...passed 00:21:52.674 Test: blockdev write zeroes read no split ...passed 00:21:52.674 Test: blockdev write zeroes read split ...passed 00:21:52.674 Test: blockdev write zeroes read split partial ...passed 00:21:52.674 Test: blockdev reset ...passed 00:21:52.674 Test: blockdev write read 8 blocks ...passed 00:21:52.674 Test: blockdev write read size > 128k ...passed 00:21:52.674 Test: blockdev write read invalid size ...passed 00:21:52.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.674 Test: blockdev write read max offset ...passed 00:21:52.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.674 Test: blockdev writev readv 8 blocks ...passed 00:21:52.674 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.674 Test: blockdev writev readv block ...passed 00:21:52.674 Test: blockdev writev readv size > 128k ...passed 00:21:52.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.674 Test: blockdev comparev and writev ...passed 00:21:52.674 Test: blockdev nvme passthru rw ...passed 00:21:52.674 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.674 Test: blockdev nvme admin passthru ...passed 00:21:52.674 Test: blockdev copy ...passed 00:21:52.674 Suite: bdevio tests on: Malloc1p0 00:21:52.674 Test: blockdev write read block ...passed 00:21:52.674 Test: blockdev 
write zeroes read block ...passed 00:21:52.674 Test: blockdev write zeroes read no split ...passed 00:21:52.674 Test: blockdev write zeroes read split ...passed 00:21:52.674 Test: blockdev write zeroes read split partial ...passed 00:21:52.674 Test: blockdev reset ...passed 00:21:52.674 Test: blockdev write read 8 blocks ...passed 00:21:52.674 Test: blockdev write read size > 128k ...passed 00:21:52.674 Test: blockdev write read invalid size ...passed 00:21:52.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.674 Test: blockdev write read max offset ...passed 00:21:52.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.674 Test: blockdev writev readv 8 blocks ...passed 00:21:52.674 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.674 Test: blockdev writev readv block ...passed 00:21:52.674 Test: blockdev writev readv size > 128k ...passed 00:21:52.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.674 Test: blockdev comparev and writev ...passed 00:21:52.674 Test: blockdev nvme passthru rw ...passed 00:21:52.674 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.674 Test: blockdev nvme admin passthru ...passed 00:21:52.674 Test: blockdev copy ...passed 00:21:52.674 Suite: bdevio tests on: Malloc0 00:21:52.674 Test: blockdev write read block ...passed 00:21:52.674 Test: blockdev write zeroes read block ...passed 00:21:52.674 Test: blockdev write zeroes read no split ...passed 00:21:52.674 Test: blockdev write zeroes read split ...passed 00:21:52.932 Test: blockdev write zeroes read split partial ...passed 00:21:52.932 Test: blockdev reset ...passed 00:21:52.932 Test: blockdev write read 8 blocks ...passed 00:21:52.932 Test: blockdev write read size > 128k ...passed 00:21:52.932 Test: blockdev write read invalid size ...passed 00:21:52.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.932 Test: blockdev write read max offset ...passed 00:21:52.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.932 Test: blockdev writev readv 8 blocks ...passed 00:21:52.932 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.932 Test: blockdev writev readv block ...passed 00:21:52.932 Test: blockdev writev readv size > 128k ...passed 00:21:52.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.932 Test: blockdev comparev and writev ...passed 00:21:52.932 Test: blockdev nvme passthru rw ...passed 00:21:52.932 Test: blockdev nvme passthru vendor specific ...passed 00:21:52.932 Test: blockdev nvme admin passthru ...passed 00:21:52.932 Test: blockdev copy ...passed 00:21:52.932 00:21:52.932 Run Summary: Type Total Ran Passed Failed Inactive 00:21:52.932 suites 16 16 n/a 0 0 00:21:52.932 tests 368 368 368 0 0 00:21:52.932 asserts 2224 2224 2224 0 n/a 00:21:52.932 00:21:52.932 Elapsed time = 4.217 seconds 00:21:52.932 0 00:21:52.932 01:51:52 -- bdev/blockdev.sh@295 -- # killprocess 116733 00:21:52.932 01:51:52 -- common/autotest_common.sh@936 -- # '[' -z 116733 ']' 00:21:52.932 01:51:52 -- common/autotest_common.sh@940 -- # kill -0 116733 00:21:52.932 01:51:52 -- common/autotest_common.sh@941 -- # uname 00:21:52.932 01:51:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.932 01:51:52 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116733 00:21:52.932 killing process with pid 116733 00:21:52.932 01:51:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:52.932 01:51:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:52.932 01:51:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116733' 00:21:52.932 01:51:52 -- common/autotest_common.sh@955 -- # kill 116733 00:21:52.932 01:51:52 -- common/autotest_common.sh@960 -- # wait 116733 00:21:55.463 ************************************ 00:21:55.463 END TEST bdev_bounds 00:21:55.463 ************************************ 00:21:55.463 01:51:55 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:21:55.463 00:21:55.463 real 0m6.040s 00:21:55.463 user 0m15.427s 00:21:55.463 sys 0m0.620s 00:21:55.463 01:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:55.463 01:51:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.463 01:51:55 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:21:55.463 01:51:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:55.463 01:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:55.463 01:51:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.722 ************************************ 00:21:55.722 START TEST bdev_nbd 00:21:55.722 ************************************ 00:21:55.722 01:51:55 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:21:55.722 01:51:55 -- bdev/blockdev.sh@300 -- # uname -s 00:21:55.722 01:51:55 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:21:55.722 01:51:55 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:55.722 01:51:55 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:55.722 01:51:55 -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:21:55.722 01:51:55 -- bdev/blockdev.sh@304 -- # local bdev_all 00:21:55.722 01:51:55 -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:21:55.722 01:51:55 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:21:55.722 01:51:55 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:55.722 01:51:55 -- bdev/blockdev.sh@311 -- # local nbd_all 00:21:55.722 01:51:55 -- bdev/blockdev.sh@312 -- # bdev_num=16 00:21:55.722 01:51:55 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:55.722 01:51:55 -- bdev/blockdev.sh@314 -- # local nbd_list 00:21:55.722 01:51:55 -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:21:55.722 01:51:55 -- bdev/blockdev.sh@315 -- # local bdev_list 00:21:55.722 01:51:55 -- bdev/blockdev.sh@318 -- # nbd_pid=116845 00:21:55.722 01:51:55 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:55.722 01:51:55 -- bdev/blockdev.sh@320 -- # waitforlisten 116845 /var/tmp/spdk-nbd.sock 00:21:55.722 01:51:55 -- common/autotest_common.sh@817 -- # '[' -z 116845 ']' 00:21:55.722 01:51:55 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:55.722 01:51:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:55.722 01:51:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:55.722 01:51:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:55.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:55.722 01:51:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:55.722 01:51:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.722 [2024-04-24 01:51:55.679117] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:21:55.722 [2024-04-24 01:51:55.679282] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.981 [2024-04-24 01:51:55.842969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.981 [2024-04-24 01:51:56.064742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.550 [2024-04-24 01:51:56.549482] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:56.550 [2024-04-24 01:51:56.549801] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:21:56.550 [2024-04-24 01:51:56.557430] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:56.550 [2024-04-24 01:51:56.557625] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:21:56.550 [2024-04-24 01:51:56.565483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:56.550 [2024-04-24 01:51:56.565646] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:21:56.550 [2024-04-24 01:51:56.565773] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:21:56.809 [2024-04-24 01:51:56.787349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:56.809 [2024-04-24 01:51:56.787693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.809 [2024-04-24 01:51:56.787772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:56.809 [2024-04-24 01:51:56.788075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.809 [2024-04-24 01:51:56.790837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.809 [2024-04-24 01:51:56.791036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:21:57.376 01:51:57 -- common/autotest_common.sh@846 -- # (( i == 0 
)) 00:21:57.376 01:51:57 -- common/autotest_common.sh@850 -- # return 0 00:21:57.377 01:51:57 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@24 -- # local i 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:57.377 01:51:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:21:57.635 01:51:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:57.635 01:51:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:57.635 01:51:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:57.635 01:51:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:57.635 01:51:57 -- common/autotest_common.sh@855 -- # local i 00:21:57.635 01:51:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:57.635 01:51:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:57.635 01:51:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:57.635 01:51:57 -- common/autotest_common.sh@859 -- # break 00:21:57.635 01:51:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:57.635 01:51:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:57.635 01:51:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.635 1+0 records in 00:21:57.635 1+0 records out 00:21:57.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037514 s, 10.9 MB/s 00:21:57.635 01:51:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.635 01:51:57 -- common/autotest_common.sh@872 -- # size=4096 00:21:57.635 01:51:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.635 01:51:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:57.635 01:51:57 -- common/autotest_common.sh@875 -- # return 0 00:21:57.635 01:51:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:57.635 01:51:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:57.635 01:51:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:21:57.894 01:51:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:57.894 01:51:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:57.894 01:51:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:57.894 01:51:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:21:57.894 01:51:57 -- common/autotest_common.sh@855 -- # local i 00:21:57.894 01:51:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:57.894 01:51:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:57.894 01:51:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:21:57.894 01:51:57 -- common/autotest_common.sh@859 -- # break 00:21:57.894 01:51:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:57.894 01:51:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:57.894 01:51:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.894 1+0 records in 00:21:57.894 1+0 records out 00:21:57.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285874 s, 14.3 MB/s 00:21:57.894 01:51:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.894 01:51:57 -- common/autotest_common.sh@872 -- # size=4096 00:21:57.894 01:51:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.894 01:51:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:57.894 01:51:57 -- common/autotest_common.sh@875 -- # return 0 00:21:57.894 01:51:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:57.894 01:51:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:57.894 01:51:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:21:58.153 01:51:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:58.153 01:51:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:58.153 01:51:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:58.153 01:51:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:21:58.153 01:51:58 -- common/autotest_common.sh@855 -- # local i 00:21:58.153 01:51:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:58.153 01:51:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:58.153 01:51:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:21:58.153 01:51:58 -- common/autotest_common.sh@859 -- # break 00:21:58.153 01:51:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:58.153 01:51:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:58.153 01:51:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:58.153 1+0 records in 00:21:58.153 1+0 records out 00:21:58.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475205 s, 8.6 MB/s 00:21:58.153 01:51:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.153 01:51:58 -- common/autotest_common.sh@872 -- # size=4096 00:21:58.153 01:51:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.153 01:51:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:58.153 01:51:58 -- common/autotest_common.sh@875 -- # return 0 00:21:58.154 01:51:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:58.154 01:51:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
00:21:58.154 01:51:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:21:58.413 01:51:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:58.413 01:51:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:58.413 01:51:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:58.413 01:51:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:21:58.413 01:51:58 -- common/autotest_common.sh@855 -- # local i 00:21:58.413 01:51:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:58.413 01:51:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:58.413 01:51:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:21:58.413 01:51:58 -- common/autotest_common.sh@859 -- # break 00:21:58.413 01:51:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:58.413 01:51:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:58.413 01:51:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:58.413 1+0 records in 00:21:58.413 1+0 records out 00:21:58.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462922 s, 8.8 MB/s 00:21:58.413 01:51:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.413 01:51:58 -- common/autotest_common.sh@872 -- # size=4096 00:21:58.413 01:51:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.413 01:51:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:58.413 01:51:58 -- common/autotest_common.sh@875 -- # return 0 00:21:58.413 01:51:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:58.413 01:51:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:58.413 01:51:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:21:58.672 01:51:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:58.672 01:51:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:58.672 01:51:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:58.672 01:51:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:21:58.672 01:51:58 -- common/autotest_common.sh@855 -- # local i 00:21:58.672 01:51:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:58.672 01:51:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:58.672 01:51:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:21:58.672 01:51:58 -- common/autotest_common.sh@859 -- # break 00:21:58.672 01:51:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:58.672 01:51:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:58.672 01:51:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:58.672 1+0 records in 00:21:58.672 1+0 records out 00:21:58.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478206 s, 8.6 MB/s 00:21:58.672 01:51:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.672 01:51:58 -- common/autotest_common.sh@872 -- # size=4096 00:21:58.672 01:51:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.672 01:51:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:58.672 01:51:58 -- common/autotest_common.sh@875 -- # return 0 00:21:58.672 01:51:58 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:58.672 01:51:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:58.672 01:51:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:21:58.947 01:51:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:58.947 01:51:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:58.947 01:51:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:58.947 01:51:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:21:58.947 01:51:58 -- common/autotest_common.sh@855 -- # local i 00:21:58.947 01:51:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:58.947 01:51:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:58.947 01:51:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:21:58.947 01:51:58 -- common/autotest_common.sh@859 -- # break 00:21:58.947 01:51:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:58.947 01:51:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:58.947 01:51:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:58.947 1+0 records in 00:21:58.947 1+0 records out 00:21:58.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857124 s, 4.8 MB/s 00:21:58.947 01:51:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.947 01:51:59 -- common/autotest_common.sh@872 -- # size=4096 00:21:58.947 01:51:59 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.947 01:51:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:58.947 01:51:59 -- common/autotest_common.sh@875 -- # return 0 00:21:58.947 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:58.947 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:58.947 01:51:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:21:59.295 01:51:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:21:59.295 01:51:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:21:59.295 01:51:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:21:59.295 01:51:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:21:59.295 01:51:59 -- common/autotest_common.sh@855 -- # local i 00:21:59.295 01:51:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:59.295 01:51:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:59.295 01:51:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:21:59.295 01:51:59 -- common/autotest_common.sh@859 -- # break 00:21:59.295 01:51:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:59.295 01:51:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:59.295 01:51:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.295 1+0 records in 00:21:59.295 1+0 records out 00:21:59.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802674 s, 5.1 MB/s 00:21:59.295 01:51:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.295 01:51:59 -- common/autotest_common.sh@872 -- # size=4096 00:21:59.295 01:51:59 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.295 01:51:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 
00:21:59.295 01:51:59 -- common/autotest_common.sh@875 -- # return 0 00:21:59.295 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:59.295 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:59.295 01:51:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:21:59.555 01:51:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:21:59.555 01:51:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:21:59.555 01:51:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:21:59.555 01:51:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:21:59.555 01:51:59 -- common/autotest_common.sh@855 -- # local i 00:21:59.555 01:51:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:59.555 01:51:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:59.555 01:51:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:21:59.555 01:51:59 -- common/autotest_common.sh@859 -- # break 00:21:59.555 01:51:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:59.555 01:51:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:59.555 01:51:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.555 1+0 records in 00:21:59.555 1+0 records out 00:21:59.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628666 s, 6.5 MB/s 00:21:59.555 01:51:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.555 01:51:59 -- common/autotest_common.sh@872 -- # size=4096 00:21:59.555 01:51:59 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.555 01:51:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:59.555 01:51:59 -- common/autotest_common.sh@875 -- # return 0 00:21:59.555 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:59.555 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:59.555 01:51:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:21:59.813 01:51:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:21:59.813 01:51:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:21:59.813 01:51:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:21:59.813 01:51:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:21:59.813 01:51:59 -- common/autotest_common.sh@855 -- # local i 00:21:59.813 01:51:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:59.813 01:51:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:59.813 01:51:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:21:59.813 01:51:59 -- common/autotest_common.sh@859 -- # break 00:21:59.813 01:51:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:59.813 01:51:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:59.813 01:51:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.813 1+0 records in 00:21:59.813 1+0 records out 00:21:59.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053548 s, 7.6 MB/s 00:21:59.813 01:51:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.813 01:51:59 -- common/autotest_common.sh@872 -- # size=4096 00:21:59.813 01:51:59 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.813 01:51:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:59.813 01:51:59 -- common/autotest_common.sh@875 -- # return 0 00:21:59.813 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:59.813 01:51:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:21:59.813 01:51:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:22:00.070 01:52:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:22:00.070 01:52:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:22:00.070 01:52:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:22:00.070 01:52:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:22:00.070 01:52:00 -- common/autotest_common.sh@855 -- # local i 00:22:00.070 01:52:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:00.070 01:52:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:00.070 01:52:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:22:00.070 01:52:00 -- common/autotest_common.sh@859 -- # break 00:22:00.070 01:52:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:00.070 01:52:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:00.070 01:52:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.070 1+0 records in 00:22:00.070 1+0 records out 00:22:00.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565123 s, 7.2 MB/s 00:22:00.070 01:52:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.070 01:52:00 -- common/autotest_common.sh@872 -- # size=4096 00:22:00.070 01:52:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.070 01:52:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:00.070 01:52:00 -- common/autotest_common.sh@875 -- # return 0 00:22:00.070 01:52:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:00.070 01:52:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:00.070 01:52:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:22:00.328 01:52:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:22:00.329 01:52:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:22:00.329 01:52:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:22:00.329 01:52:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:22:00.329 01:52:00 -- common/autotest_common.sh@855 -- # local i 00:22:00.329 01:52:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:00.329 01:52:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:00.329 01:52:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:22:00.329 01:52:00 -- common/autotest_common.sh@859 -- # break 00:22:00.329 01:52:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:00.329 01:52:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:00.329 01:52:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.329 1+0 records in 00:22:00.329 1+0 records out 00:22:00.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610305 s, 6.7 MB/s 00:22:00.329 01:52:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.329 01:52:00 -- 
common/autotest_common.sh@872 -- # size=4096 00:22:00.329 01:52:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.329 01:52:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:00.329 01:52:00 -- common/autotest_common.sh@875 -- # return 0 00:22:00.329 01:52:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:00.329 01:52:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:00.329 01:52:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:22:00.587 01:52:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:22:00.587 01:52:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:22:00.587 01:52:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:22:00.587 01:52:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:22:00.587 01:52:00 -- common/autotest_common.sh@855 -- # local i 00:22:00.587 01:52:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:00.587 01:52:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:00.587 01:52:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:22:00.845 01:52:00 -- common/autotest_common.sh@859 -- # break 00:22:00.845 01:52:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:00.845 01:52:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:00.845 01:52:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.845 1+0 records in 00:22:00.845 1+0 records out 00:22:00.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944627 s, 4.3 MB/s 00:22:00.845 01:52:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.845 01:52:00 -- common/autotest_common.sh@872 -- # size=4096 00:22:00.845 01:52:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.845 01:52:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:00.845 01:52:00 -- common/autotest_common.sh@875 -- # return 0 00:22:00.845 01:52:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:00.845 01:52:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:00.845 01:52:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:22:01.104 01:52:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:22:01.104 01:52:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:22:01.104 01:52:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:22:01.104 01:52:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:22:01.104 01:52:00 -- common/autotest_common.sh@855 -- # local i 00:22:01.104 01:52:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:01.104 01:52:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:01.104 01:52:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:22:01.104 01:52:00 -- common/autotest_common.sh@859 -- # break 00:22:01.104 01:52:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:01.104 01:52:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:01.104 01:52:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.104 1+0 records in 00:22:01.104 1+0 records out 00:22:01.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114816 s, 3.6 MB/s 00:22:01.104 01:52:00 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.104 01:52:00 -- common/autotest_common.sh@872 -- # size=4096 00:22:01.104 01:52:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.104 01:52:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:01.104 01:52:01 -- common/autotest_common.sh@875 -- # return 0 00:22:01.104 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:01.104 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:01.104 01:52:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:22:01.362 01:52:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:22:01.362 01:52:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:22:01.363 01:52:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:22:01.363 01:52:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:22:01.363 01:52:01 -- common/autotest_common.sh@855 -- # local i 00:22:01.363 01:52:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:01.363 01:52:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:01.363 01:52:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:22:01.363 01:52:01 -- common/autotest_common.sh@859 -- # break 00:22:01.363 01:52:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:01.363 01:52:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:01.363 01:52:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.363 1+0 records in 00:22:01.363 1+0 records out 00:22:01.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812031 s, 5.0 MB/s 00:22:01.363 01:52:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.363 01:52:01 -- common/autotest_common.sh@872 -- # size=4096 00:22:01.363 01:52:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.363 01:52:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:01.363 01:52:01 -- common/autotest_common.sh@875 -- # return 0 00:22:01.363 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:01.363 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:01.363 01:52:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:22:01.621 01:52:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:22:01.621 01:52:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:22:01.621 01:52:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:22:01.621 01:52:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:22:01.621 01:52:01 -- common/autotest_common.sh@855 -- # local i 00:22:01.621 01:52:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:01.621 01:52:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:01.621 01:52:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:22:01.621 01:52:01 -- common/autotest_common.sh@859 -- # break 00:22:01.621 01:52:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:01.621 01:52:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:01.621 01:52:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.621 1+0 records in 00:22:01.621 1+0 records out 
00:22:01.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000964552 s, 4.2 MB/s 00:22:01.621 01:52:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.621 01:52:01 -- common/autotest_common.sh@872 -- # size=4096 00:22:01.621 01:52:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.621 01:52:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:01.621 01:52:01 -- common/autotest_common.sh@875 -- # return 0 00:22:01.621 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:01.621 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:01.621 01:52:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:22:01.879 01:52:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:22:01.879 01:52:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:22:01.879 01:52:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:22:01.879 01:52:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:22:01.879 01:52:01 -- common/autotest_common.sh@855 -- # local i 00:22:01.879 01:52:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:01.879 01:52:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:01.879 01:52:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:22:01.879 01:52:01 -- common/autotest_common.sh@859 -- # break 00:22:01.879 01:52:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:01.879 01:52:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:01.879 01:52:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.879 1+0 records in 00:22:01.879 1+0 records out 00:22:01.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140007 s, 2.9 MB/s 00:22:01.879 01:52:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.879 01:52:01 -- common/autotest_common.sh@872 -- # size=4096 00:22:01.879 01:52:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.879 01:52:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:01.879 01:52:01 -- common/autotest_common.sh@875 -- # return 0 00:22:01.879 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:01.879 01:52:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:22:01.879 01:52:01 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:02.137 01:52:02 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd0", 00:22:02.137 "bdev_name": "Malloc0" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd1", 00:22:02.137 "bdev_name": "Malloc1p0" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd2", 00:22:02.137 "bdev_name": "Malloc1p1" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd3", 00:22:02.137 "bdev_name": "Malloc2p0" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd4", 00:22:02.137 "bdev_name": "Malloc2p1" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd5", 00:22:02.137 "bdev_name": "Malloc2p2" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd6", 00:22:02.137 "bdev_name": "Malloc2p3" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd7", 00:22:02.137 "bdev_name": "Malloc2p4" 00:22:02.137 }, 
00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd8", 00:22:02.137 "bdev_name": "Malloc2p5" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd9", 00:22:02.137 "bdev_name": "Malloc2p6" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd10", 00:22:02.137 "bdev_name": "Malloc2p7" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd11", 00:22:02.137 "bdev_name": "TestPT" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd12", 00:22:02.137 "bdev_name": "raid0" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd13", 00:22:02.137 "bdev_name": "concat0" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd14", 00:22:02.137 "bdev_name": "raid1" 00:22:02.137 }, 00:22:02.137 { 00:22:02.137 "nbd_device": "/dev/nbd15", 00:22:02.137 "bdev_name": "AIO0" 00:22:02.137 } 00:22:02.137 ]' 00:22:02.138 01:52:02 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:02.138 01:52:02 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:02.138 01:52:02 -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd0", 00:22:02.138 "bdev_name": "Malloc0" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd1", 00:22:02.138 "bdev_name": "Malloc1p0" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd2", 00:22:02.138 "bdev_name": "Malloc1p1" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd3", 00:22:02.138 "bdev_name": "Malloc2p0" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd4", 00:22:02.138 "bdev_name": "Malloc2p1" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd5", 00:22:02.138 "bdev_name": "Malloc2p2" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd6", 00:22:02.138 "bdev_name": "Malloc2p3" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd7", 00:22:02.138 "bdev_name": "Malloc2p4" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd8", 00:22:02.138 "bdev_name": "Malloc2p5" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd9", 00:22:02.138 "bdev_name": "Malloc2p6" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd10", 00:22:02.138 "bdev_name": "Malloc2p7" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd11", 00:22:02.138 "bdev_name": "TestPT" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd12", 00:22:02.138 "bdev_name": "raid0" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd13", 00:22:02.138 "bdev_name": "concat0" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd14", 00:22:02.138 "bdev_name": "raid1" 00:22:02.138 }, 00:22:02.138 { 00:22:02.138 "nbd_device": "/dev/nbd15", 00:22:02.138 "bdev_name": "AIO0" 00:22:02.138 } 00:22:02.138 ]' 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@51 -- # local i 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.396 01:52:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:02.654 01:52:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@41 -- # break 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.655 01:52:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@41 -- # break 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.998 01:52:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@41 -- # break 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.998 01:52:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@41 -- # break 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.256 01:52:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:22:03.514 
01:52:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@41 -- # break 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.514 01:52:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:22:03.772 01:52:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:22:03.772 01:52:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:22:03.772 01:52:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@41 -- # break 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.773 01:52:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:22:04.031 01:52:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:22:04.031 01:52:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@41 -- # break 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.032 01:52:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@41 -- # break 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.291 01:52:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@41 -- # break 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:22:04.549 01:52:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@41 -- # break 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.807 01:52:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@41 -- # break 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.065 01:52:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@41 -- # break 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.323 01:52:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@41 -- # break 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.582 01:52:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@41 -- # break 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.841 01:52:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@41 -- # break 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@45 -- # return 0 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:06.099 01:52:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@41 -- # break 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@45 -- # return 0 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:06.357 01:52:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@65 -- # true 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@65 -- # count=0 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@122 -- # count=0 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@127 -- # return 0 00:22:06.616 01:52:06 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:06.616 01:52:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:06.617 01:52:06 -- bdev/nbd_common.sh@12 -- # local i 00:22:06.617 01:52:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:06.617 01:52:06 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:06.617 01:52:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:22:06.876 /dev/nbd0 00:22:06.876 01:52:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:06.876 01:52:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:06.876 01:52:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:06.876 01:52:06 -- common/autotest_common.sh@855 -- # local i 00:22:06.876 01:52:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:06.876 01:52:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:06.876 01:52:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:06.876 01:52:06 -- common/autotest_common.sh@859 -- # break 00:22:06.876 01:52:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:06.876 01:52:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:06.876 01:52:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:06.876 1+0 records in 00:22:06.876 1+0 records out 00:22:06.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400659 s, 10.2 MB/s 00:22:06.876 01:52:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:06.876 01:52:06 -- common/autotest_common.sh@872 -- # size=4096 00:22:06.876 01:52:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:06.876 01:52:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:06.876 01:52:06 -- common/autotest_common.sh@875 -- # return 0 00:22:06.876 01:52:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:06.876 
01:52:06 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:06.876 01:52:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:22:07.135 /dev/nbd1 00:22:07.135 01:52:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:07.135 01:52:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:07.135 01:52:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:07.135 01:52:07 -- common/autotest_common.sh@855 -- # local i 00:22:07.135 01:52:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:07.135 01:52:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:07.135 01:52:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:07.135 01:52:07 -- common/autotest_common.sh@859 -- # break 00:22:07.135 01:52:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:07.135 01:52:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:07.135 01:52:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.135 1+0 records in 00:22:07.135 1+0 records out 00:22:07.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426739 s, 9.6 MB/s 00:22:07.136 01:52:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.136 01:52:07 -- common/autotest_common.sh@872 -- # size=4096 00:22:07.136 01:52:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.136 01:52:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:07.136 01:52:07 -- common/autotest_common.sh@875 -- # return 0 00:22:07.136 01:52:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.136 01:52:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:07.136 01:52:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:22:07.394 /dev/nbd10 00:22:07.394 01:52:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:22:07.394 01:52:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:22:07.394 01:52:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:22:07.395 01:52:07 -- common/autotest_common.sh@855 -- # local i 00:22:07.395 01:52:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:07.395 01:52:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:07.395 01:52:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:22:07.395 01:52:07 -- common/autotest_common.sh@859 -- # break 00:22:07.395 01:52:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:07.395 01:52:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:07.395 01:52:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.395 1+0 records in 00:22:07.395 1+0 records out 00:22:07.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411082 s, 10.0 MB/s 00:22:07.395 01:52:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.395 01:52:07 -- common/autotest_common.sh@872 -- # size=4096 00:22:07.395 01:52:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.395 01:52:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:07.395 01:52:07 -- common/autotest_common.sh@875 -- # return 0 00:22:07.395 01:52:07 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:22:07.395 01:52:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:07.395 01:52:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:22:07.653 /dev/nbd11 00:22:07.653 01:52:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:22:07.653 01:52:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:22:07.653 01:52:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:22:07.653 01:52:07 -- common/autotest_common.sh@855 -- # local i 00:22:07.653 01:52:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:07.653 01:52:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:07.653 01:52:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:22:07.653 01:52:07 -- common/autotest_common.sh@859 -- # break 00:22:07.653 01:52:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:07.653 01:52:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:07.653 01:52:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.653 1+0 records in 00:22:07.653 1+0 records out 00:22:07.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420959 s, 9.7 MB/s 00:22:07.923 01:52:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.923 01:52:07 -- common/autotest_common.sh@872 -- # size=4096 00:22:07.923 01:52:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.923 01:52:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:07.923 01:52:07 -- common/autotest_common.sh@875 -- # return 0 00:22:07.923 01:52:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.923 01:52:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:07.923 01:52:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:22:08.213 /dev/nbd12 00:22:08.213 01:52:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:22:08.213 01:52:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:22:08.213 01:52:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:22:08.213 01:52:08 -- common/autotest_common.sh@855 -- # local i 00:22:08.213 01:52:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:08.213 01:52:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:08.213 01:52:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:22:08.213 01:52:08 -- common/autotest_common.sh@859 -- # break 00:22:08.213 01:52:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:08.213 01:52:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:08.213 01:52:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:08.213 1+0 records in 00:22:08.213 1+0 records out 00:22:08.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403877 s, 10.1 MB/s 00:22:08.213 01:52:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.213 01:52:08 -- common/autotest_common.sh@872 -- # size=4096 00:22:08.213 01:52:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.213 01:52:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:08.213 01:52:08 -- common/autotest_common.sh@875 -- # return 0 00:22:08.213 01:52:08 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:08.213 01:52:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:08.213 01:52:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:22:08.472 /dev/nbd13 00:22:08.472 01:52:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:22:08.472 01:52:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:22:08.472 01:52:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:22:08.472 01:52:08 -- common/autotest_common.sh@855 -- # local i 00:22:08.472 01:52:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:08.472 01:52:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:08.472 01:52:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:22:08.472 01:52:08 -- common/autotest_common.sh@859 -- # break 00:22:08.472 01:52:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:08.472 01:52:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:08.472 01:52:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:08.472 1+0 records in 00:22:08.472 1+0 records out 00:22:08.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545135 s, 7.5 MB/s 00:22:08.472 01:52:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.472 01:52:08 -- common/autotest_common.sh@872 -- # size=4096 00:22:08.472 01:52:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.472 01:52:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:08.472 01:52:08 -- common/autotest_common.sh@875 -- # return 0 00:22:08.472 01:52:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:08.472 01:52:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:08.472 01:52:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:22:08.731 /dev/nbd14 00:22:08.731 01:52:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:22:08.731 01:52:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:22:08.731 01:52:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:22:08.731 01:52:08 -- common/autotest_common.sh@855 -- # local i 00:22:08.731 01:52:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:08.731 01:52:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:08.731 01:52:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:22:08.731 01:52:08 -- common/autotest_common.sh@859 -- # break 00:22:08.731 01:52:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:08.731 01:52:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:08.731 01:52:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:08.731 1+0 records in 00:22:08.731 1+0 records out 00:22:08.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555353 s, 7.4 MB/s 00:22:08.731 01:52:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.731 01:52:08 -- common/autotest_common.sh@872 -- # size=4096 00:22:08.731 01:52:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.731 01:52:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:08.731 01:52:08 -- common/autotest_common.sh@875 -- # return 0 
00:22:08.731 01:52:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:08.731 01:52:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:08.731 01:52:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:22:08.990 /dev/nbd15 00:22:08.990 01:52:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:22:08.990 01:52:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:22:08.990 01:52:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:22:08.990 01:52:08 -- common/autotest_common.sh@855 -- # local i 00:22:08.990 01:52:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:08.990 01:52:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:08.990 01:52:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:22:08.990 01:52:08 -- common/autotest_common.sh@859 -- # break 00:22:08.990 01:52:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:08.990 01:52:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:08.990 01:52:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:08.990 1+0 records in 00:22:08.990 1+0 records out 00:22:08.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557812 s, 7.3 MB/s 00:22:08.990 01:52:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.990 01:52:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:08.990 01:52:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.990 01:52:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:08.990 01:52:09 -- common/autotest_common.sh@875 -- # return 0 00:22:08.990 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:08.990 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:08.990 01:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:22:09.247 /dev/nbd2 00:22:09.247 01:52:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:22:09.247 01:52:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:22:09.247 01:52:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:22:09.247 01:52:09 -- common/autotest_common.sh@855 -- # local i 00:22:09.247 01:52:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:09.247 01:52:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:09.247 01:52:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:22:09.248 01:52:09 -- common/autotest_common.sh@859 -- # break 00:22:09.248 01:52:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:09.248 01:52:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:09.248 01:52:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:09.248 1+0 records in 00:22:09.248 1+0 records out 00:22:09.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527665 s, 7.8 MB/s 00:22:09.248 01:52:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.248 01:52:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:09.248 01:52:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.248 01:52:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:09.248 01:52:09 -- common/autotest_common.sh@875 
-- # return 0 00:22:09.248 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:09.248 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:09.248 01:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:22:09.505 /dev/nbd3 00:22:09.505 01:52:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:22:09.505 01:52:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:22:09.505 01:52:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:22:09.505 01:52:09 -- common/autotest_common.sh@855 -- # local i 00:22:09.505 01:52:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:09.505 01:52:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:09.505 01:52:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:22:09.505 01:52:09 -- common/autotest_common.sh@859 -- # break 00:22:09.505 01:52:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:09.505 01:52:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:09.505 01:52:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:09.505 1+0 records in 00:22:09.505 1+0 records out 00:22:09.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631513 s, 6.5 MB/s 00:22:09.505 01:52:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.505 01:52:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:09.505 01:52:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.505 01:52:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:09.505 01:52:09 -- common/autotest_common.sh@875 -- # return 0 00:22:09.505 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:09.505 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:09.505 01:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:22:09.763 /dev/nbd4 00:22:09.763 01:52:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:22:09.763 01:52:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:22:09.763 01:52:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:22:09.763 01:52:09 -- common/autotest_common.sh@855 -- # local i 00:22:09.763 01:52:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:09.763 01:52:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:09.763 01:52:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:22:09.763 01:52:09 -- common/autotest_common.sh@859 -- # break 00:22:09.763 01:52:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:09.763 01:52:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:09.763 01:52:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:09.763 1+0 records in 00:22:09.763 1+0 records out 00:22:09.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458729 s, 8.9 MB/s 00:22:09.763 01:52:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.763 01:52:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:09.763 01:52:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.763 01:52:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:09.763 01:52:09 -- 
common/autotest_common.sh@875 -- # return 0 00:22:09.763 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:09.763 01:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:09.763 01:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:22:10.021 /dev/nbd5 00:22:10.278 01:52:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:22:10.278 01:52:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:22:10.278 01:52:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:22:10.278 01:52:10 -- common/autotest_common.sh@855 -- # local i 00:22:10.278 01:52:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:22:10.279 01:52:10 -- common/autotest_common.sh@859 -- # break 00:22:10.279 01:52:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:10.279 1+0 records in 00:22:10.279 1+0 records out 00:22:10.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758011 s, 5.4 MB/s 00:22:10.279 01:52:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:10.279 01:52:10 -- common/autotest_common.sh@872 -- # size=4096 00:22:10.279 01:52:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:10.279 01:52:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:10.279 01:52:10 -- common/autotest_common.sh@875 -- # return 0 00:22:10.279 01:52:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:10.279 01:52:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:10.279 01:52:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:22:10.279 /dev/nbd6 00:22:10.279 01:52:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:22:10.279 01:52:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:22:10.279 01:52:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:22:10.279 01:52:10 -- common/autotest_common.sh@855 -- # local i 00:22:10.279 01:52:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:22:10.279 01:52:10 -- common/autotest_common.sh@859 -- # break 00:22:10.279 01:52:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:10.279 01:52:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:10.537 1+0 records in 00:22:10.537 1+0 records out 00:22:10.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552754 s, 7.4 MB/s 00:22:10.537 01:52:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:10.537 01:52:10 -- common/autotest_common.sh@872 -- # size=4096 00:22:10.537 01:52:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:10.537 01:52:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:10.537 01:52:10 -- 
common/autotest_common.sh@875 -- # return 0 00:22:10.537 01:52:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:10.537 01:52:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:10.537 01:52:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:22:10.794 /dev/nbd7 00:22:10.794 01:52:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:22:10.794 01:52:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:22:10.794 01:52:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:22:10.794 01:52:10 -- common/autotest_common.sh@855 -- # local i 00:22:10.794 01:52:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:10.794 01:52:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:10.794 01:52:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:22:10.794 01:52:10 -- common/autotest_common.sh@859 -- # break 00:22:10.794 01:52:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:10.794 01:52:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:10.794 01:52:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:10.794 1+0 records in 00:22:10.794 1+0 records out 00:22:10.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641599 s, 6.4 MB/s 00:22:10.794 01:52:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:10.794 01:52:10 -- common/autotest_common.sh@872 -- # size=4096 00:22:10.794 01:52:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:10.794 01:52:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:10.794 01:52:10 -- common/autotest_common.sh@875 -- # return 0 00:22:10.794 01:52:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:10.794 01:52:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:10.794 01:52:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:22:11.052 /dev/nbd8 00:22:11.052 01:52:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:22:11.052 01:52:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:22:11.052 01:52:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:22:11.052 01:52:10 -- common/autotest_common.sh@855 -- # local i 00:22:11.052 01:52:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:11.052 01:52:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:11.052 01:52:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:22:11.052 01:52:10 -- common/autotest_common.sh@859 -- # break 00:22:11.052 01:52:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:11.052 01:52:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:11.052 01:52:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.052 1+0 records in 00:22:11.052 1+0 records out 00:22:11.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921552 s, 4.4 MB/s 00:22:11.052 01:52:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.052 01:52:11 -- common/autotest_common.sh@872 -- # size=4096 00:22:11.052 01:52:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.052 01:52:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:11.052 01:52:11 
-- common/autotest_common.sh@875 -- # return 0 00:22:11.052 01:52:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.052 01:52:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:11.052 01:52:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:22:11.310 /dev/nbd9 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:22:11.310 01:52:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:22:11.310 01:52:11 -- common/autotest_common.sh@855 -- # local i 00:22:11.310 01:52:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:11.310 01:52:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:11.310 01:52:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:22:11.310 01:52:11 -- common/autotest_common.sh@859 -- # break 00:22:11.310 01:52:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:11.310 01:52:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:11.310 01:52:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.310 1+0 records in 00:22:11.310 1+0 records out 00:22:11.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112053 s, 3.7 MB/s 00:22:11.310 01:52:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.310 01:52:11 -- common/autotest_common.sh@872 -- # size=4096 00:22:11.310 01:52:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.310 01:52:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:11.310 01:52:11 -- common/autotest_common.sh@875 -- # return 0 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:11.310 01:52:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:11.568 01:52:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd0", 00:22:11.568 "bdev_name": "Malloc0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd1", 00:22:11.568 "bdev_name": "Malloc1p0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd10", 00:22:11.568 "bdev_name": "Malloc1p1" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd11", 00:22:11.568 "bdev_name": "Malloc2p0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd12", 00:22:11.568 "bdev_name": "Malloc2p1" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd13", 00:22:11.568 "bdev_name": "Malloc2p2" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd14", 00:22:11.568 "bdev_name": "Malloc2p3" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd15", 00:22:11.568 "bdev_name": "Malloc2p4" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd2", 00:22:11.568 "bdev_name": "Malloc2p5" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd3", 00:22:11.568 "bdev_name": "Malloc2p6" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd4", 00:22:11.568 "bdev_name": "Malloc2p7" 00:22:11.568 }, 00:22:11.568 { 
00:22:11.568 "nbd_device": "/dev/nbd5", 00:22:11.568 "bdev_name": "TestPT" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd6", 00:22:11.568 "bdev_name": "raid0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd7", 00:22:11.568 "bdev_name": "concat0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd8", 00:22:11.568 "bdev_name": "raid1" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd9", 00:22:11.568 "bdev_name": "AIO0" 00:22:11.568 } 00:22:11.568 ]' 00:22:11.568 01:52:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd0", 00:22:11.568 "bdev_name": "Malloc0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd1", 00:22:11.568 "bdev_name": "Malloc1p0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd10", 00:22:11.568 "bdev_name": "Malloc1p1" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd11", 00:22:11.568 "bdev_name": "Malloc2p0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd12", 00:22:11.568 "bdev_name": "Malloc2p1" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd13", 00:22:11.568 "bdev_name": "Malloc2p2" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd14", 00:22:11.568 "bdev_name": "Malloc2p3" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd15", 00:22:11.568 "bdev_name": "Malloc2p4" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd2", 00:22:11.568 "bdev_name": "Malloc2p5" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd3", 00:22:11.568 "bdev_name": "Malloc2p6" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd4", 00:22:11.568 "bdev_name": "Malloc2p7" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd5", 00:22:11.568 "bdev_name": "TestPT" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd6", 00:22:11.568 "bdev_name": "raid0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd7", 00:22:11.568 "bdev_name": "concat0" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd8", 00:22:11.568 "bdev_name": "raid1" 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "nbd_device": "/dev/nbd9", 00:22:11.568 "bdev_name": "AIO0" 00:22:11.568 } 00:22:11.568 ]' 00:22:11.568 01:52:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:11.568 01:52:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:11.568 /dev/nbd1 00:22:11.568 /dev/nbd10 00:22:11.568 /dev/nbd11 00:22:11.568 /dev/nbd12 00:22:11.568 /dev/nbd13 00:22:11.568 /dev/nbd14 00:22:11.568 /dev/nbd15 00:22:11.568 /dev/nbd2 00:22:11.568 /dev/nbd3 00:22:11.568 /dev/nbd4 00:22:11.568 /dev/nbd5 00:22:11.568 /dev/nbd6 00:22:11.568 /dev/nbd7 00:22:11.568 /dev/nbd8 00:22:11.568 /dev/nbd9' 00:22:11.568 01:52:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:11.568 /dev/nbd1 00:22:11.568 /dev/nbd10 00:22:11.569 /dev/nbd11 00:22:11.569 /dev/nbd12 00:22:11.569 /dev/nbd13 00:22:11.569 /dev/nbd14 00:22:11.569 /dev/nbd15 00:22:11.569 /dev/nbd2 00:22:11.569 /dev/nbd3 00:22:11.569 /dev/nbd4 00:22:11.569 /dev/nbd5 00:22:11.569 /dev/nbd6 00:22:11.569 /dev/nbd7 00:22:11.569 /dev/nbd8 00:22:11.569 /dev/nbd9' 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@65 -- # count=16 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@66 -- # echo 16 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@95 -- # count=16 00:22:11.569 01:52:11 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:11.569 256+0 records in 00:22:11.569 256+0 records out 00:22:11.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671199 s, 156 MB/s 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:11.569 01:52:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:11.827 256+0 records in 00:22:11.827 256+0 records out 00:22:11.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168617 s, 6.2 MB/s 00:22:11.827 01:52:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:11.827 01:52:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:12.085 256+0 records in 00:22:12.085 256+0 records out 00:22:12.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172267 s, 6.1 MB/s 00:22:12.085 01:52:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:12.085 01:52:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:22:12.085 256+0 records in 00:22:12.085 256+0 records out 00:22:12.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166364 s, 6.3 MB/s 00:22:12.085 01:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:12.085 01:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:22:12.343 256+0 records in 00:22:12.343 256+0 records out 00:22:12.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165589 s, 6.3 MB/s 00:22:12.343 01:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:12.343 01:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:22:12.601 256+0 records in 00:22:12.601 256+0 records out 00:22:12.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176602 s, 5.9 MB/s 00:22:12.601 01:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:12.601 01:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:22:12.601 256+0 records in 00:22:12.601 256+0 records out 00:22:12.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178059 s, 5.9 MB/s 00:22:12.601 01:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:12.601 01:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:22:12.863 256+0 records in 00:22:12.863 256+0 records out 00:22:12.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17432 s, 6.0 MB/s 00:22:12.863 01:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:12.863 01:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:22:13.122 256+0 records in 00:22:13.122 256+0 records out 00:22:13.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165008 s, 6.4 MB/s 00:22:13.122 01:52:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:13.122 01:52:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:22:13.122 256+0 records in 00:22:13.122 256+0 records out 00:22:13.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163841 s, 6.4 MB/s 00:22:13.122 01:52:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:13.122 01:52:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:22:13.380 256+0 records in 00:22:13.380 256+0 records out 00:22:13.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169347 s, 6.2 MB/s 00:22:13.380 01:52:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:13.380 01:52:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:22:13.639 256+0 records in 00:22:13.639 256+0 records out 00:22:13.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16922 s, 6.2 MB/s 00:22:13.639 01:52:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:13.639 01:52:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:22:13.639 256+0 records in 00:22:13.639 256+0 records out 00:22:13.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172643 s, 6.1 MB/s 00:22:13.639 01:52:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:13.639 01:52:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:22:13.897 256+0 records in 00:22:13.897 256+0 records out 00:22:13.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168219 s, 6.2 MB/s 00:22:13.897 01:52:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:13.897 01:52:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:22:14.154 256+0 records in 00:22:14.154 256+0 records out 00:22:14.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143559 s, 7.3 MB/s 00:22:14.155 01:52:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:14.155 01:52:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:22:14.155 256+0 records in 00:22:14.155 256+0 records out 00:22:14.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175316 s, 6.0 MB/s 00:22:14.155 01:52:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:14.155 01:52:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:22:14.413 256+0 records in 00:22:14.413 256+0 records out 00:22:14.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251935 s, 4.2 MB/s 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.413 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@51 -- # local i 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:14.671 01:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@41 -- # break 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@45 -- # return 0 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:14.928 01:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@41 -- # break 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.186 01:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.187 01:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.446 01:52:15 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@41 -- # break 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.446 01:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@41 -- # break 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.728 01:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:22:15.986 01:52:16 -- bdev/nbd_common.sh@41 -- # break 00:22:15.987 01:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.987 01:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.987 01:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@41 -- # break 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.246 01:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@41 -- # break 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.505 01:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:22:16.763 01:52:16 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:22:16.763 01:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@41 -- # break 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.021 01:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@41 -- # break 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.279 01:52:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@41 -- # break 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.538 01:52:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@41 -- # break 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.797 01:52:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@41 
-- # break 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.055 01:52:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@41 -- # break 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.313 01:52:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@41 -- # break 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.572 01:52:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@41 -- # break 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@41 -- # break 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:18.830 01:52:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:19.089 01:52:19 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:22:19.089 01:52:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:19.089 01:52:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@65 -- # true 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@65 -- # count=0 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@104 -- # count=0 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@109 -- # return 0 00:22:19.348 01:52:19 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:19.348 malloc_lvol_verify 00:22:19.348 01:52:19 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:19.606 52720315-bdb8-4e79-ba4f-8f979d1c6da2 00:22:19.864 01:52:19 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:19.864 997cf90c-f5a9-47dc-ab63-891e4a1f9827 00:22:19.864 01:52:19 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:20.123 /dev/nbd0 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:22:20.123 mke2fs 1.46.5 (30-Dec-2021) 00:22:20.123 00:22:20.123 Filesystem too small for a journal 00:22:20.123 Discarding device blocks: 0/1024 done 00:22:20.123 Creating filesystem with 1024 4k blocks and 1024 inodes 00:22:20.123 00:22:20.123 Allocating group tables: 0/1 done 00:22:20.123 Writing inode tables: 0/1 done 00:22:20.123 Writing superblocks and filesystem accounting information: 0/1 done 00:22:20.123 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@51 -- # local i 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.123 01:52:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:20.382 
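[editor's note] The lvol round-trip above exercises the same NBD path end-to-end with a real filesystem: build a malloc bdev, put an lvstore and a small logical volume on it, export the lvol as /dev/nbd0, and check that mkfs.ext4 succeeds before tearing the export down. A condensed sketch of that sequence using the RPCs shown in the trace (arguments are the literal ones from the trace; the poll interval is an assumption, and this is illustrative rather than the helper itself):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # malloc bdev, arguments as traced (size 16, block size 512)
  rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore "lvs" on top of it
  rpc bdev_lvol_create lvol 4 -l lvs                    # lvol "lvol" of size 4 inside "lvs"
  rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0
  mkfs_ret=$?                                           # 0 in the run above
  rpc nbd_stop_disk /dev/nbd0
  for ((i = 1; i <= 20; i++)); do                       # 20-attempt cap as in the traced waitfornbd_exit
      grep -q -w nbd0 /proc/partitions || break
      sleep 0.1                                         # poll interval is illustrative, not from the trace
  done
  [ "$mkfs_ret" -eq 0 ]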
01:52:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@41 -- # break 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@45 -- # return 0 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:22:20.382 01:52:20 -- bdev/nbd_common.sh@147 -- # return 0 00:22:20.382 01:52:20 -- bdev/blockdev.sh@326 -- # killprocess 116845 00:22:20.382 01:52:20 -- common/autotest_common.sh@936 -- # '[' -z 116845 ']' 00:22:20.382 01:52:20 -- common/autotest_common.sh@940 -- # kill -0 116845 00:22:20.382 01:52:20 -- common/autotest_common.sh@941 -- # uname 00:22:20.382 01:52:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.382 01:52:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116845 00:22:20.382 01:52:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:20.382 01:52:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:20.382 01:52:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116845' 00:22:20.382 killing process with pid 116845 00:22:20.641 01:52:20 -- common/autotest_common.sh@955 -- # kill 116845 00:22:20.641 01:52:20 -- common/autotest_common.sh@960 -- # wait 116845 00:22:23.175 ************************************ 00:22:23.175 END TEST bdev_nbd 00:22:23.175 ************************************ 00:22:23.175 01:52:23 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:22:23.175 00:22:23.175 real 0m27.596s 00:22:23.175 user 0m34.957s 00:22:23.175 sys 0m11.852s 00:22:23.175 01:52:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:23.175 01:52:23 -- common/autotest_common.sh@10 -- # set +x 00:22:23.175 01:52:23 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:22:23.175 01:52:23 -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:22:23.175 01:52:23 -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:22:23.175 01:52:23 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:22:23.175 01:52:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:23.175 01:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:23.175 01:52:23 -- common/autotest_common.sh@10 -- # set +x 00:22:23.434 ************************************ 00:22:23.434 START TEST bdev_fio 00:22:23.434 ************************************ 00:22:23.434 01:52:23 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:22:23.434 01:52:23 -- bdev/blockdev.sh@331 -- # local env_context 00:22:23.434 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:23.434 01:52:23 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:23.434 01:52:23 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:23.434 01:52:23 -- bdev/blockdev.sh@339 -- # echo '' 00:22:23.434 01:52:23 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:22:23.434 01:52:23 -- bdev/blockdev.sh@339 -- # env_context= 00:22:23.434 01:52:23 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:23.434 01:52:23 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:23.434 01:52:23 -- common/autotest_common.sh@1267 -- # 
local workload=verify 00:22:23.434 01:52:23 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:22:23.434 01:52:23 -- common/autotest_common.sh@1269 -- # local env_context= 00:22:23.434 01:52:23 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:22:23.434 01:52:23 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:23.434 01:52:23 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:22:23.434 01:52:23 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:22:23.434 01:52:23 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:23.434 01:52:23 -- common/autotest_common.sh@1287 -- # cat 00:22:23.434 01:52:23 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:22:23.434 01:52:23 -- common/autotest_common.sh@1300 -- # cat 00:22:23.434 01:52:23 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:22:23.434 01:52:23 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:22:23.435 01:52:23 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:22:23.435 01:52:23 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b 
in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:22:23.435 01:52:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:22:23.435 01:52:23 -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:22:23.435 01:52:23 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:23.435 01:52:23 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:23.435 01:52:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:22:23.435 01:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:23.435 01:52:23 -- common/autotest_common.sh@10 -- # set +x 00:22:23.435 ************************************ 00:22:23.435 START TEST bdev_fio_rw_verify 00:22:23.435 ************************************ 00:22:23.435 01:52:23 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:23.435 01:52:23 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:23.435 01:52:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:23.435 01:52:23 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:23.435 01:52:23 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:23.435 01:52:23 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:23.435 01:52:23 -- common/autotest_common.sh@1327 -- # shift 00:22:23.435 01:52:23 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:23.435 01:52:23 -- common/autotest_common.sh@1330 -- # for sanitizer in 
"${sanitizers[@]}" 00:22:23.435 01:52:23 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:23.435 01:52:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:23.435 01:52:23 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:23.694 01:52:23 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:22:23.694 01:52:23 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:22:23.694 01:52:23 -- common/autotest_common.sh@1333 -- # break 00:22:23.694 01:52:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:23.694 01:52:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:23.694 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.694 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.695 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:23.695 fio-3.35 00:22:23.695 Starting 16 threads 00:22:35.926 00:22:35.926 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=118046: Wed Apr 24 01:52:35 2024 00:22:35.926 read: IOPS=66.6k, BW=260MiB/s (273MB/s)(2602MiB/10006msec) 00:22:35.926 slat (usec): min=2, max=47041, avg=44.06, stdev=495.82 00:22:35.926 clat (usec): min=7, max=47319, avg=363.37, stdev=1441.84 00:22:35.926 lat (usec): 
min=23, max=47356, avg=407.43, stdev=1524.21 00:22:35.926 clat percentiles (usec): 00:22:35.926 | 50.000th=[ 210], 99.000th=[ 3392], 99.900th=[16450], 99.990th=[28443], 00:22:35.926 | 99.999th=[37487] 00:22:35.926 write: IOPS=104k, BW=407MiB/s (426MB/s)(4032MiB/9916msec); 0 zone resets 00:22:35.926 slat (usec): min=5, max=55792, avg=74.52, stdev=703.30 00:22:35.926 clat (usec): min=8, max=56209, avg=453.81, stdev=1678.68 00:22:35.926 lat (usec): min=38, max=56269, avg=528.34, stdev=1820.02 00:22:35.926 clat percentiles (usec): 00:22:35.926 | 50.000th=[ 262], 99.000th=[10290], 99.900th=[20841], 99.990th=[35390], 00:22:35.926 | 99.999th=[49021] 00:22:35.926 bw ( KiB/s): min=245808, max=672169, per=99.08%, avg=412561.26, stdev=7721.03, samples=304 00:22:35.926 iops : min=61452, max=168041, avg=103140.16, stdev=1930.25, samples=304 00:22:35.926 lat (usec) : 10=0.01%, 20=0.01%, 50=0.47%, 100=7.00%, 250=46.32% 00:22:35.926 lat (usec) : 500=40.44%, 750=4.11%, 1000=0.27% 00:22:35.926 lat (msec) : 2=0.16%, 4=0.08%, 10=0.22%, 20=0.83%, 50=0.10% 00:22:35.926 lat (msec) : 100=0.01% 00:22:35.926 cpu : usr=56.55%, sys=2.01%, ctx=256667, majf=2, minf=71672 00:22:35.926 IO depths : 1=11.1%, 2=23.5%, 4=52.2%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.926 issued rwts: total=666147,1032237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:35.926 00:22:35.926 Run status group 0 (all jobs): 00:22:35.926 READ: bw=260MiB/s (273MB/s), 260MiB/s-260MiB/s (273MB/s-273MB/s), io=2602MiB (2729MB), run=10006-10006msec 00:22:35.926 WRITE: bw=407MiB/s (426MB/s), 407MiB/s-407MiB/s (426MB/s-426MB/s), io=4032MiB (4228MB), run=9916-9916msec 00:22:38.462 ----------------------------------------------------- 00:22:38.462 Suppressions used: 00:22:38.462 count bytes template 00:22:38.462 16 140 /usr/src/fio/parse.c 00:22:38.462 11296 1084416 /usr/src/fio/iolog.c 00:22:38.462 1 904 libcrypto.so 00:22:38.462 ----------------------------------------------------- 00:22:38.462 00:22:38.462 00:22:38.462 real 0m14.824s 00:22:38.462 user 1m37.186s 00:22:38.462 sys 0m4.239s 00:22:38.462 ************************************ 00:22:38.462 END TEST bdev_fio_rw_verify 00:22:38.462 01:52:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:38.462 01:52:38 -- common/autotest_common.sh@10 -- # set +x 00:22:38.462 ************************************ 00:22:38.462 01:52:38 -- bdev/blockdev.sh@350 -- # rm -f 00:22:38.462 01:52:38 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.462 01:52:38 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:38.462 01:52:38 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.462 01:52:38 -- common/autotest_common.sh@1267 -- # local workload=trim 00:22:38.462 01:52:38 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:22:38.462 01:52:38 -- common/autotest_common.sh@1269 -- # local env_context= 00:22:38.462 01:52:38 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:22:38.462 01:52:38 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:38.462 01:52:38 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:22:38.462 01:52:38 -- 
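[editor's note] The verify stage that just finished is driven by a generated job file (one [job_<bdev>] section with filename=<bdev> per bdev, as echoed earlier) and fio's external SPDK bdev ioengine; because the build is ASAN-instrumented, the run LD_PRELOADs the libasan found by ldd'ing the plugin alongside the plugin itself, and the read numbers in the summary come from fio's verify read-back of what each randwrite job wrote. A condensed sketch of that invocation, with the flags and paths exactly as they appear in the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /lib/x86_64-linux-gnu/libasan.so.6 in the run above
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 \
      --aux-path=/home/vagrant/spdk_repo/spdk/../output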
common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:22:38.462 01:52:38 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.462 01:52:38 -- common/autotest_common.sh@1287 -- # cat 00:22:38.462 01:52:38 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:22:38.462 01:52:38 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:22:38.462 01:52:38 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:22:38.462 01:52:38 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:38.463 01:52:38 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d2782d55-130f-433a-af55-11308d0992f5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d2782d55-130f-433a-af55-11308d0992f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "932c4d42-a62c-52ce-bc87-b9490a25fa1d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "932c4d42-a62c-52ce-bc87-b9490a25fa1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "bd632f3e-5739-5d3c-b09e-e022279ed98f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bd632f3e-5739-5d3c-b09e-e022279ed98f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d15989a3-027f-5373-86c3-244b40b0fc25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d15989a3-027f-5373-86c3-244b40b0fc25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": 
true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "34ba9237-5a14-5851-abfc-bb718efff827"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "34ba9237-5a14-5851-abfc-bb718efff827",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f1547b23-7469-5790-b43f-c1e1998cf72c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1547b23-7469-5790-b43f-c1e1998cf72c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "829a07bd-db64-54cf-bf79-e1992afd56b9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "829a07bd-db64-54cf-bf79-e1992afd56b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "afddec8a-fc37-59c6-87b8-a23e1659fe1c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "afddec8a-fc37-59c6-87b8-a23e1659fe1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "0f82aebd-f805-5c7d-8ecb-d415872a1316"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f82aebd-f805-5c7d-8ecb-d415872a1316",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "15baaa49-2630-5956-9d32-21f3596b5143"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "15baaa49-2630-5956-9d32-21f3596b5143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e573ec0b-c43a-5f48-ada5-e4f99b33dfb8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e573ec0b-c43a-5f48-ada5-e4f99b33dfb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b6c1693a-813d-5742-8cca-a222db95ce49"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b6c1693a-813d-5742-8cca-a222db95ce49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d04716a6-e6b3-4e66-b055-1af08ac08208"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d04716a6-e6b3-4e66-b055-1af08ac08208",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d04716a6-e6b3-4e66-b055-1af08ac08208",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "1468de49-239a-4061-af7b-d581de7cf39b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f33ddecb-2f3a-4953-9591-d953fe0cc207",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1367b197-07fa-473f-b0e9-d0bf67a48bc3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1367b197-07fa-473f-b0e9-d0bf67a48bc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1367b197-07fa-473f-b0e9-d0bf67a48bc3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "8dd46316-4925-4aa2-a7eb-f4aa679e74b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c8ec7219-73ea-434e-bb5b-9fe929613a11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "530d2e86-563a-4e91-9e83-33c54850a7f8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "530d2e86-563a-4e91-9e83-33c54850a7f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "530d2e86-563a-4e91-9e83-33c54850a7f8",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "eda18a5a-b851-4d1d-ab01-8e709838f724",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7bce8a4e-372f-492d-b22d-af3c6e382dd3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4d6b79cc-cd01-4b1c-b85c-0a71da2bae10"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4d6b79cc-cd01-4b1c-b85c-0a71da2bae10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:22:38.463 01:52:38 -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:22:38.463 Malloc1p0 00:22:38.463 Malloc1p1 00:22:38.463 Malloc2p0 00:22:38.463 Malloc2p1 00:22:38.463 Malloc2p2 00:22:38.463 Malloc2p3 00:22:38.463 Malloc2p4 00:22:38.463 Malloc2p5 00:22:38.463 Malloc2p6 00:22:38.463 Malloc2p7 00:22:38.463 TestPT 00:22:38.463 raid0 00:22:38.463 concat0 ]] 00:22:38.463 01:52:38 -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d2782d55-130f-433a-af55-11308d0992f5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d2782d55-130f-433a-af55-11308d0992f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "932c4d42-a62c-52ce-bc87-b9490a25fa1d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "932c4d42-a62c-52ce-bc87-b9490a25fa1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": 
true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "bd632f3e-5739-5d3c-b09e-e022279ed98f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bd632f3e-5739-5d3c-b09e-e022279ed98f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d15989a3-027f-5373-86c3-244b40b0fc25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d15989a3-027f-5373-86c3-244b40b0fc25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "34ba9237-5a14-5851-abfc-bb718efff827"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "34ba9237-5a14-5851-abfc-bb718efff827",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f1547b23-7469-5790-b43f-c1e1998cf72c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1547b23-7469-5790-b43f-c1e1998cf72c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "829a07bd-db64-54cf-bf79-e1992afd56b9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "829a07bd-db64-54cf-bf79-e1992afd56b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "afddec8a-fc37-59c6-87b8-a23e1659fe1c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "afddec8a-fc37-59c6-87b8-a23e1659fe1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "0f82aebd-f805-5c7d-8ecb-d415872a1316"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f82aebd-f805-5c7d-8ecb-d415872a1316",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "15baaa49-2630-5956-9d32-21f3596b5143"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "15baaa49-2630-5956-9d32-21f3596b5143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e573ec0b-c43a-5f48-ada5-e4f99b33dfb8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e573ec0b-c43a-5f48-ada5-e4f99b33dfb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' 
"b6c1693a-813d-5742-8cca-a222db95ce49"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b6c1693a-813d-5742-8cca-a222db95ce49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d04716a6-e6b3-4e66-b055-1af08ac08208"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d04716a6-e6b3-4e66-b055-1af08ac08208",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d04716a6-e6b3-4e66-b055-1af08ac08208",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "1468de49-239a-4061-af7b-d581de7cf39b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f33ddecb-2f3a-4953-9591-d953fe0cc207",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1367b197-07fa-473f-b0e9-d0bf67a48bc3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1367b197-07fa-473f-b0e9-d0bf67a48bc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"1367b197-07fa-473f-b0e9-d0bf67a48bc3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "8dd46316-4925-4aa2-a7eb-f4aa679e74b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c8ec7219-73ea-434e-bb5b-9fe929613a11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "530d2e86-563a-4e91-9e83-33c54850a7f8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "530d2e86-563a-4e91-9e83-33c54850a7f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "530d2e86-563a-4e91-9e83-33c54850a7f8",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "eda18a5a-b851-4d1d-ab01-8e709838f724",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7bce8a4e-372f-492d-b22d-af3c6e382dd3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4d6b79cc-cd01-4b1c-b85c-0a71da2bae10"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4d6b79cc-cd01-4b1c-b85c-0a71da2bae10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 
00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:22:38.465 01:52:38 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:22:38.465 01:52:38 -- 
bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:22:38.465 01:52:38 -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:22:38.465 01:52:38 -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:38.465 01:52:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:22:38.465 01:52:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:38.465 01:52:38 -- common/autotest_common.sh@10 -- # set +x 00:22:38.725 ************************************ 00:22:38.725 START TEST bdev_fio_trim 00:22:38.725 ************************************ 00:22:38.725 01:52:38 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:38.725 01:52:38 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:38.725 01:52:38 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:38.725 01:52:38 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.725 01:52:38 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:38.725 01:52:38 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.725 01:52:38 -- common/autotest_common.sh@1327 -- # shift 00:22:38.725 01:52:38 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:38.725 01:52:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.725 01:52:38 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.725 01:52:38 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:38.725 01:52:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:38.725 01:52:38 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:22:38.725 01:52:38 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:22:38.725 01:52:38 -- common/autotest_common.sh@1333 -- # break 00:22:38.725 01:52:38 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:38.725 01:52:38 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:38.725 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:38.725 fio-3.35 00:22:38.725 Starting 14 threads 00:22:51.046 00:22:51.046 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=118282: Wed Apr 24 01:52:50 2024 00:22:51.046 write: IOPS=113k, BW=441MiB/s (463MB/s)(4418MiB/10012msec); 0 zone resets 00:22:51.046 slat (usec): min=2, max=43709, avg=44.02, stdev=426.90 00:22:51.046 clat (usec): min=23, max=43900, avg=303.71, stdev=1124.04 00:22:51.046 lat (usec): min=36, max=43933, avg=347.73, stdev=1202.07 00:22:51.046 clat percentiles (usec): 00:22:51.046 | 50.000th=[ 210], 99.000th=[ 445], 99.900th=[16319], 99.990th=[20317], 00:22:51.046 | 99.999th=[28443] 00:22:51.046 bw ( KiB/s): min=309816, max=648968, per=99.98%, avg=451786.24, stdev=8321.76, samples=267 00:22:51.046 iops : min=77454, max=162242, avg=112946.49, stdev=2080.44, samples=267 00:22:51.046 trim: IOPS=113k, BW=441MiB/s (463MB/s)(4418MiB/10012msec); 0 zone resets 00:22:51.046 slat (usec): min=4, max=29605, avg=31.47, stdev=372.72 00:22:51.046 clat (usec): min=4, max=43934, avg=344.54, stdev=1199.94 00:22:51.046 lat (usec): min=13, max=43954, avg=376.00, stdev=1256.42 00:22:51.046 clat percentiles (usec): 00:22:51.046 | 50.000th=[ 241], 99.000th=[ 502], 99.900th=[16319], 99.990th=[20317], 00:22:51.046 | 99.999th=[28443] 00:22:51.046 bw ( KiB/s): min=309816, max=648968, per=99.98%, avg=451786.24, stdev=8321.71, samples=267 00:22:51.046 iops : min=77454, max=162242, avg=112946.49, stdev=2080.43, samples=267 00:22:51.046 lat (usec) : 10=0.01%, 20=0.01%, 50=0.32%, 100=3.85%, 250=56.10% 00:22:51.046 lat (usec) : 500=38.86%, 750=0.15%, 1000=0.01% 00:22:51.046 lat (msec) : 2=0.01%, 4=0.01%, 10=0.06%, 20=0.61%, 50=0.02% 00:22:51.046 cpu : usr=69.09%, sys=0.52%, ctx=170341, majf=0, minf=846 00:22:51.046 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:51.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.046 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.046 issued rwts: total=0,1131044,1131049,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:51.046 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:51.046 00:22:51.046 Run status group 0 (all jobs): 00:22:51.046 WRITE: bw=441MiB/s (463MB/s), 441MiB/s-441MiB/s (463MB/s-463MB/s), io=4418MiB (4633MB), run=10012-10012msec 00:22:51.046 TRIM: bw=441MiB/s (463MB/s), 441MiB/s-441MiB/s (463MB/s-463MB/s), io=4418MiB (4633MB), run=10012-10012msec 00:22:52.947 ----------------------------------------------------- 00:22:52.947 Suppressions used: 00:22:52.947 count bytes template 00:22:52.947 14 129 /usr/src/fio/parse.c 00:22:52.947 1 904 libcrypto.so 00:22:52.947 ----------------------------------------------------- 00:22:52.947 00:22:52.947 00:22:52.947 real 0m14.443s 00:22:52.947 user 1m42.595s 00:22:52.947 sys 0m1.729s 00:22:52.947 01:52:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:52.947 01:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:52.947 ************************************ 00:22:52.947 END TEST bdev_fio_trim 00:22:52.947 ************************************ 00:22:53.205 01:52:53 -- bdev/blockdev.sh@368 -- # rm -f 00:22:53.205 01:52:53 -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:53.205 01:52:53 -- bdev/blockdev.sh@370 -- # popd 00:22:53.205 /home/vagrant/spdk_repo/spdk 00:22:53.205 01:52:53 -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:22:53.205 00:22:53.205 real 0m29.755s 00:22:53.205 user 3m20.033s 00:22:53.205 sys 0m6.173s 00:22:53.205 01:52:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:53.205 01:52:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.205 ************************************ 00:22:53.205 END TEST bdev_fio 00:22:53.205 ************************************ 00:22:53.205 01:52:53 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:53.205 01:52:53 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:53.205 01:52:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:22:53.205 01:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:53.205 01:52:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.205 ************************************ 00:22:53.205 START TEST bdev_verify 00:22:53.205 ************************************ 00:22:53.205 01:52:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:53.205 [2024-04-24 01:52:53.223934] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
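For reference, the trim stage that closed just above (END TEST bdev_fio_trim / bdev_fio) is assembled by blockdev.sh in three steps: the cached bdev JSON is filtered with jq for devices that report unmap support, one [job_*] section per selected bdev is appended to bdev.fio, and fio is started through the SPDK bdev plugin with libasan placed first in LD_PRELOAD so the sanitizer runtime is initialized before the plugin. A rough, hand-written sketch of those steps with shortened paths; only the jq filter, the [job_*]/filename pairs and the fio options are taken from the trace above, everything else (the rpc.py call, the sample bdev names) is illustrative:

# list bdevs that support unmap (same filter as blockdev.sh@356);
# against a running target the list can come from the bdev_get_bdevs RPC
scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'

# append one job section per selected bdev to the fio job file
for b in Malloc0 Malloc1p0 Malloc1p1; do   # illustrative subset
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> test/bdev/bdev.fio
done

# run fio through the SPDK bdev plugin, ASan runtime first in LD_PRELOAD
LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.6 build/fio/spdk_bdev" \
    fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=test/bdev/bdev.json test/bdev/bdev.fio

The run summary above is self-consistent: io=4418MiB over run=10012msec works out to the reported 441MiB/s for both the WRITE and TRIM groups.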
00:22:53.205 [2024-04-24 01:52:53.224077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118479 ] 00:22:53.462 [2024-04-24 01:52:53.396335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:53.719 [2024-04-24 01:52:53.689257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.719 [2024-04-24 01:52:53.689261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.285 [2024-04-24 01:52:54.141718] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:54.285 [2024-04-24 01:52:54.141844] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:22:54.285 [2024-04-24 01:52:54.149664] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:54.285 [2024-04-24 01:52:54.149743] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:22:54.285 [2024-04-24 01:52:54.157681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:54.285 [2024-04-24 01:52:54.157736] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:22:54.285 [2024-04-24 01:52:54.157771] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:22:54.543 [2024-04-24 01:52:54.390524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:22:54.543 [2024-04-24 01:52:54.390670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.543 [2024-04-24 01:52:54.390714] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:54.543 [2024-04-24 01:52:54.390737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.543 [2024-04-24 01:52:54.393596] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.543 [2024-04-24 01:52:54.393676] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:22:54.804 Running I/O for 5 seconds... 
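bdev_verify drives the same set of bdevs through the standalone bdevperf example instead of fio: --json points it at test/bdev/bdev.json, -q 128 sets the queue depth, -o 4096 the IO size in bytes, -w verify the workload, -t 5 the run time in seconds, and -m 0x3 the core mask (hence the two reactors on cores 0 and 1 above). The JSON file follows the usual SPDK config layout of subsystems, methods and params; below is a minimal hand-written sketch of its shape, not the real test file. The method names follow current SPDK RPC naming and the parameter values are taken from the bdev dump earlier in this log, but the actual bdev.json also creates the remaining malloc, split, raid, concat and AIO bdevs:

cat > bdev.min.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_split_create",
          "params": { "base_bdev": "Malloc1", "split_count": 2 } },
        { "method": "bdev_passthru_create",
          "params": { "base_bdev_name": "Malloc3", "name": "TestPT" } }
      ]
    }
  ]
}
EOF

bdevperf (and the fio plugin via --spdk_json_conf) then consumes such a file exactly as traced above, applying each method in order before the workload starts.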
00:23:01.391 00:23:01.391 Latency(us) 00:23:01.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.391 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x1000 00:23:01.391 Malloc0 : 5.21 1106.60 4.32 0.00 0.00 115427.57 694.37 351522.62 00:23:01.391 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x1000 length 0x1000 00:23:01.391 Malloc0 : 5.20 1082.06 4.23 0.00 0.00 118052.83 709.97 393465.66 00:23:01.391 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x800 00:23:01.391 Malloc1p0 : 5.26 584.57 2.28 0.00 0.00 217913.22 3073.95 174762.67 00:23:01.391 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x800 length 0x800 00:23:01.391 Malloc1p0 : 5.26 584.46 2.28 0.00 0.00 217986.68 3120.76 177758.60 00:23:01.391 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x800 00:23:01.391 Malloc1p1 : 5.26 584.35 2.28 0.00 0.00 217477.33 2964.72 173764.02 00:23:01.391 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x800 length 0x800 00:23:01.391 Malloc1p1 : 5.26 584.25 2.28 0.00 0.00 217508.25 3011.54 173764.02 00:23:01.391 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p0 : 5.26 584.14 2.28 0.00 0.00 217069.53 2793.08 173764.02 00:23:01.391 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p0 : 5.26 584.03 2.28 0.00 0.00 217083.52 2793.08 172765.38 00:23:01.391 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p1 : 5.26 583.93 2.28 0.00 0.00 216672.48 2746.27 171766.74 00:23:01.391 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p1 : 5.26 583.82 2.28 0.00 0.00 216646.19 2761.87 169769.45 00:23:01.391 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p2 : 5.26 583.72 2.28 0.00 0.00 216279.00 2559.02 170768.09 00:23:01.391 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p2 : 5.26 583.60 2.28 0.00 0.00 216239.10 2559.02 168770.80 00:23:01.391 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p3 : 5.26 583.51 2.28 0.00 0.00 215906.82 2246.95 171766.74 00:23:01.391 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p3 : 5.27 583.39 2.28 0.00 0.00 215840.59 2309.36 169769.45 00:23:01.391 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p4 : 5.27 583.30 2.28 0.00 0.00 215567.74 
2106.51 173764.02 00:23:01.391 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p4 : 5.27 583.18 2.28 0.00 0.00 215487.99 2137.72 171766.74 00:23:01.391 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p5 : 5.27 583.09 2.28 0.00 0.00 215259.91 2012.89 172765.38 00:23:01.391 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p5 : 5.27 582.96 2.28 0.00 0.00 215164.51 2075.31 169769.45 00:23:01.391 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p6 : 5.27 582.88 2.28 0.00 0.00 214956.24 2044.10 175761.31 00:23:01.391 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p6 : 5.27 582.74 2.28 0.00 0.00 214840.55 2168.93 170768.09 00:23:01.391 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x200 00:23:01.391 Malloc2p7 : 5.27 582.65 2.28 0.00 0.00 214640.21 2122.12 177758.60 00:23:01.391 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x200 length 0x200 00:23:01.391 Malloc2p7 : 5.27 582.53 2.28 0.00 0.00 214515.60 1934.87 171766.74 00:23:01.391 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x1000 00:23:01.391 TestPT : 5.27 563.18 2.20 0.00 0.00 220525.41 9861.61 175761.31 00:23:01.391 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x1000 length 0x1000 00:23:01.391 TestPT : 5.28 557.77 2.18 0.00 0.00 223516.94 35951.18 249660.95 00:23:01.391 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x2000 00:23:01.391 raid0 : 5.28 582.02 2.27 0.00 0.00 213939.25 2309.36 168770.80 00:23:01.391 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x2000 length 0x2000 00:23:01.391 raid0 : 5.28 581.52 2.27 0.00 0.00 213911.79 2356.18 155788.43 00:23:01.391 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x2000 00:23:01.391 concat0 : 5.28 581.53 2.27 0.00 0.00 213701.09 2246.95 167772.16 00:23:01.391 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x2000 length 0x2000 00:23:01.391 concat0 : 5.29 581.03 2.27 0.00 0.00 213686.83 2340.57 156787.08 00:23:01.391 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x0 length 0x1000 00:23:01.391 raid1 : 5.29 581.04 2.27 0.00 0.00 213496.99 2886.70 164776.23 00:23:01.391 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x1000 length 0x1000 00:23:01.391 raid1 : 5.29 580.61 2.27 0.00 0.00 213416.74 2449.80 156787.08 00:23:01.391 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: 
start 0x0 length 0x4e2 00:23:01.391 AIO0 : 5.29 580.47 2.27 0.00 0.00 212942.30 4712.35 168770.80 00:23:01.391 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.391 Verification LBA range: start 0x4e2 length 0x4e2 00:23:01.391 AIO0 : 5.29 580.19 2.27 0.00 0.00 212866.09 2137.72 172765.38 00:23:01.391 =================================================================================================================== 00:23:01.391 Total : 19629.12 76.68 0.00 0.00 204893.42 694.37 393465.66 00:23:02.767 00:23:02.767 real 0m9.649s 00:23:02.767 user 0m16.944s 00:23:02.767 sys 0m0.500s 00:23:02.767 01:53:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:02.767 01:53:02 -- common/autotest_common.sh@10 -- # set +x 00:23:02.767 ************************************ 00:23:02.767 END TEST bdev_verify 00:23:02.767 ************************************ 00:23:03.026 01:53:02 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:03.026 01:53:02 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:23:03.026 01:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:03.026 01:53:02 -- common/autotest_common.sh@10 -- # set +x 00:23:03.026 ************************************ 00:23:03.026 START TEST bdev_verify_big_io 00:23:03.026 ************************************ 00:23:03.026 01:53:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:03.026 [2024-04-24 01:53:03.002106] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:23:03.026 [2024-04-24 01:53:03.002343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118614 ] 00:23:03.284 [2024-04-24 01:53:03.190195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:03.542 [2024-04-24 01:53:03.408706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.542 [2024-04-24 01:53:03.408709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.800 [2024-04-24 01:53:03.828720] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:23:03.800 [2024-04-24 01:53:03.828810] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:23:03.800 [2024-04-24 01:53:03.836675] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:23:03.800 [2024-04-24 01:53:03.836730] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:23:03.800 [2024-04-24 01:53:03.844690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:03.800 [2024-04-24 01:53:03.844741] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:23:03.800 [2024-04-24 01:53:03.844774] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:23:04.057 [2024-04-24 01:53:04.062716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:04.057 [2024-04-24 01:53:04.062819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.057 [2024-04-24 01:53:04.062857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:04.057 [2024-04-24 01:53:04.062877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.057 [2024-04-24 01:53:04.065209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.057 [2024-04-24 01:53:04.065263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:23:04.629 [2024-04-24 01:53:04.478510] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.482498] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.486556] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.490906] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.494642] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.498943] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.502683] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.506906] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.510839] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.515042] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.518517] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.522509] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.526024] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.529996] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.533902] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.537382] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:23:04.629 [2024-04-24 01:53:04.628557] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:23:04.629 [2024-04-24 01:53:04.635562] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:23:04.629 Running I/O for 5 seconds... 00:23:11.189 00:23:11.189 Latency(us) 00:23:11.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.189 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x100 00:23:11.189 Malloc0 : 5.42 259.93 16.25 0.00 0.00 485642.43 628.05 1493971.14 00:23:11.189 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x100 length 0x100 00:23:11.189 Malloc0 : 5.44 258.67 16.17 0.00 0.00 486237.26 596.85 1725656.50 00:23:11.189 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x80 00:23:11.189 Malloc1p0 : 5.77 136.50 8.53 0.00 0.00 867435.73 1997.29 1765602.26 00:23:11.189 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x80 length 0x80 00:23:11.189 Malloc1p0 : 6.28 48.44 3.03 0.00 0.00 2403595.95 1263.91 3802835.63 00:23:11.189 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x80 00:23:11.189 Malloc1p1 : 5.96 53.72 3.36 0.00 0.00 2203582.74 1131.28 3675009.22 00:23:11.189 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x80 length 0x80 00:23:11.189 Malloc1p1 : 6.32 50.60 3.16 0.00 0.00 2263996.42 1217.10 3659030.92 00:23:11.189 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p0 : 5.78 38.78 2.42 0.00 0.00 756647.11 585.14 1302231.53 00:23:11.189 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p0 : 5.96 40.28 2.52 0.00 0.00 719440.66 526.63 1398101.33 00:23:11.189 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p1 : 5.78 38.77 2.42 0.00 0.00 751978.55 542.23 1286253.23 00:23:11.189 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p1 : 5.96 40.28 2.52 0.00 0.00 714538.74 530.53 1374133.88 00:23:11.189 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p2 : 5.78 38.77 2.42 0.00 0.00 747592.82 639.76 1262285.78 00:23:11.189 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p2 : 5.96 40.27 2.52 0.00 0.00 709555.47 561.74 1350166.43 00:23:11.189 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p3 : 5.85 41.03 2.56 0.00 0.00 708846.76 530.53 1246307.47 00:23:11.189 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p3 : 5.96 40.26 2.52 0.00 0.00 704562.93 538.33 1326198.98 00:23:11.189 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p4 : 5.85 41.02 2.56 0.00 0.00 704765.11 542.23 1230329.17 00:23:11.189 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p4 : 5.96 40.25 2.52 0.00 0.00 699366.68 542.23 1302231.53 00:23:11.189 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p5 : 5.85 41.02 2.56 0.00 0.00 700652.70 526.63 1214350.87 00:23:11.189 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p5 : 5.96 40.24 2.52 0.00 0.00 693903.86 542.23 1278264.08 00:23:11.189 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p6 : 5.85 41.01 2.56 0.00 0.00 696723.76 542.23 1198372.57 00:23:11.189 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p6 : 5.96 40.24 2.51 0.00 0.00 688805.18 542.23 1262285.78 00:23:11.189 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x20 00:23:11.189 Malloc2p7 : 5.85 41.00 2.56 0.00 0.00 692507.02 573.44 1182394.27 00:23:11.189 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x20 length 0x20 00:23:11.189 Malloc2p7 : 5.97 40.23 2.51 0.00 0.00 683403.01 573.44 1230329.17 00:23:11.189 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x100 00:23:11.189 TestPT : 6.16 52.63 3.29 0.00 0.00 2085165.19 76895.57 3131746.99 00:23:11.189 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x100 length 0x100 00:23:11.189 TestPT : 6.28 53.51 3.34 0.00 0.00 2004096.82 81888.79 2987942.28 00:23:11.189 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x200 00:23:11.189 raid0 : 6.20 56.76 3.55 0.00 0.00 1891830.50 1256.11 3323486.60 00:23:11.189 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x200 length 0x200 00:23:11.189 raid0 : 6.33 58.16 3.64 0.00 0.00 1797963.04 1295.12 3179681.89 00:23:11.189 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x0 length 0x200 00:23:11.189 concat0 : 6.16 62.36 3.90 0.00 0.00 1698222.48 1209.30 3211638.49 00:23:11.189 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.189 Verification LBA range: start 0x200 length 0x200 00:23:11.190 concat0 : 6.28 85.47 5.34 0.00 0.00 1208506.61 
1224.90 3067833.78 00:23:11.190 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:11.190 Verification LBA range: start 0x0 length 0x100 00:23:11.190 raid1 : 6.16 72.72 4.54 0.00 0.00 1446405.69 1607.19 3099790.38 00:23:11.190 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:11.190 Verification LBA range: start 0x100 length 0x100 00:23:11.190 raid1 : 6.35 85.63 5.35 0.00 0.00 1177973.42 1591.59 2924029.07 00:23:11.190 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:23:11.190 Verification LBA range: start 0x0 length 0x4e 00:23:11.190 AIO0 : 6.21 81.55 5.10 0.00 0.00 772221.29 1185.89 1829515.46 00:23:11.190 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:23:11.190 Verification LBA range: start 0x4e length 0x4e 00:23:11.190 AIO0 : 6.44 113.27 7.08 0.00 0.00 532060.61 983.04 1717667.35 00:23:11.190 =================================================================================================================== 00:23:11.190 Total : 2173.36 135.84 0.00 0.00 1006435.22 526.63 3802835.63 00:23:14.476 00:23:14.476 real 0m11.128s 00:23:14.476 user 0m20.468s 00:23:14.476 sys 0m0.468s 00:23:14.476 01:53:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:14.476 01:53:14 -- common/autotest_common.sh@10 -- # set +x 00:23:14.476 ************************************ 00:23:14.476 END TEST bdev_verify_big_io 00:23:14.476 ************************************ 00:23:14.476 01:53:14 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:14.476 01:53:14 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:23:14.476 01:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:14.476 01:53:14 -- common/autotest_common.sh@10 -- # set +x 00:23:14.476 ************************************ 00:23:14.476 START TEST bdev_write_zeroes 00:23:14.476 ************************************ 00:23:14.476 01:53:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:14.476 [2024-04-24 01:53:14.238339] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:23:14.476 [2024-04-24 01:53:14.239079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118776 ] 00:23:14.477 [2024-04-24 01:53:14.414641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.735 [2024-04-24 01:53:14.627297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.994 [2024-04-24 01:53:15.059610] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:23:14.994 [2024-04-24 01:53:15.059701] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:23:14.994 [2024-04-24 01:53:15.067590] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:23:14.994 [2024-04-24 01:53:15.067641] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:23:14.994 [2024-04-24 01:53:15.075601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:14.994 [2024-04-24 01:53:15.075641] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:23:14.994 [2024-04-24 01:53:15.075670] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:23:15.252 [2024-04-24 01:53:15.300911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:15.252 [2024-04-24 01:53:15.301023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.252 [2024-04-24 01:53:15.301052] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:15.252 [2024-04-24 01:53:15.301077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.252 [2024-04-24 01:53:15.303448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.252 [2024-04-24 01:53:15.303505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:23:15.819 Running I/O for 1 seconds... 
00:23:16.754 00:23:16.754 Latency(us) 00:23:16.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.754 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc0 : 1.02 6388.39 24.95 0.00 0.00 20025.82 511.02 33204.91 00:23:16.754 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc1p0 : 1.02 6381.67 24.93 0.00 0.00 20018.12 690.47 32455.92 00:23:16.754 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc1p1 : 1.02 6375.20 24.90 0.00 0.00 20008.28 667.06 31831.77 00:23:16.754 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p0 : 1.03 6368.55 24.88 0.00 0.00 19996.13 670.96 31082.79 00:23:16.754 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p1 : 1.04 6390.10 24.96 0.00 0.00 19900.09 670.96 30583.47 00:23:16.754 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p2 : 1.04 6383.62 24.94 0.00 0.00 19884.47 670.96 30084.14 00:23:16.754 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p3 : 1.04 6377.41 24.91 0.00 0.00 19870.40 706.07 29459.99 00:23:16.754 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p4 : 1.04 6370.87 24.89 0.00 0.00 19861.33 686.57 28835.84 00:23:16.754 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p5 : 1.05 6364.49 24.86 0.00 0.00 19852.42 674.86 28336.52 00:23:16.754 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p6 : 1.05 6358.24 24.84 0.00 0.00 19830.97 667.06 27712.37 00:23:16.754 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 Malloc2p7 : 1.05 6352.04 24.81 0.00 0.00 19819.59 670.96 27213.04 00:23:16.754 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 TestPT : 1.05 6345.54 24.79 0.00 0.00 19810.32 694.37 26713.72 00:23:16.754 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.754 raid0 : 1.05 6338.12 24.76 0.00 0.00 19793.58 1209.30 25590.25 00:23:16.754 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.755 concat0 : 1.05 6331.11 24.73 0.00 0.00 19763.13 1217.10 24591.60 00:23:16.755 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.755 raid1 : 1.05 6322.13 24.70 0.00 0.00 19719.85 1942.67 22594.32 00:23:16.755 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:16.755 AIO0 : 1.05 6308.71 24.64 0.00 0.00 19681.38 1357.53 22344.66 00:23:16.755 =================================================================================================================== 00:23:16.755 Total : 101756.18 397.49 0.00 0.00 19864.04 511.02 33204.91 00:23:19.283 00:23:19.283 real 0m5.190s 00:23:19.283 user 0m4.588s 00:23:19.283 sys 0m0.408s 00:23:19.283 01:53:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:19.283 ************************************ 00:23:19.283 END TEST bdev_write_zeroes 00:23:19.283 ************************************ 00:23:19.283 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:23:19.541 01:53:19 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:19.541 01:53:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:23:19.541 01:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.541 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:23:19.541 ************************************ 00:23:19.541 START TEST bdev_json_nonenclosed 00:23:19.541 ************************************ 00:23:19.541 01:53:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:19.541 [2024-04-24 01:53:19.542264] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:23:19.541 [2024-04-24 01:53:19.542466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118866 ] 00:23:19.799 [2024-04-24 01:53:19.718402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.058 [2024-04-24 01:53:19.928686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.058 [2024-04-24 01:53:19.928811] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:20.058 [2024-04-24 01:53:19.928849] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:20.058 [2024-04-24 01:53:19.928873] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:20.316 00:23:20.316 real 0m0.923s 00:23:20.316 user 0m0.671s 00:23:20.316 sys 0m0.152s 00:23:20.316 01:53:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.317 01:53:20 -- common/autotest_common.sh@10 -- # set +x 00:23:20.317 ************************************ 00:23:20.317 END TEST bdev_json_nonenclosed 00:23:20.317 ************************************ 00:23:20.594 01:53:20 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:20.594 01:53:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:23:20.594 01:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.594 01:53:20 -- common/autotest_common.sh@10 -- # set +x 00:23:20.594 ************************************ 00:23:20.594 START TEST bdev_json_nonarray 00:23:20.594 ************************************ 00:23:20.594 01:53:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:20.594 [2024-04-24 01:53:20.576448] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:23:20.594 [2024-04-24 01:53:20.576653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118909 ] 00:23:20.851 [2024-04-24 01:53:20.753928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.109 [2024-04-24 01:53:20.967990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.109 [2024-04-24 01:53:20.968122] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:21.109 [2024-04-24 01:53:20.968175] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:21.109 [2024-04-24 01:53:20.968199] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:21.367 00:23:21.367 real 0m0.902s 00:23:21.367 user 0m0.654s 00:23:21.367 sys 0m0.148s 00:23:21.367 01:53:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:21.367 01:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.367 ************************************ 00:23:21.367 END TEST bdev_json_nonarray 00:23:21.367 ************************************ 00:23:21.625 01:53:21 -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:23:21.625 01:53:21 -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:23:21.625 01:53:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:21.625 01:53:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.625 01:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.625 ************************************ 00:23:21.625 START TEST bdev_qos 00:23:21.625 ************************************ 00:23:21.625 01:53:21 -- common/autotest_common.sh@1111 -- # qos_test_suite '' 00:23:21.625 01:53:21 -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:23:21.625 01:53:21 -- bdev/blockdev.sh@446 -- # QOS_PID=118945 00:23:21.625 Process qos testing pid: 118945 00:23:21.625 01:53:21 -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 118945' 00:23:21.625 01:53:21 -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:23:21.625 01:53:21 -- bdev/blockdev.sh@449 -- # waitforlisten 118945 00:23:21.625 01:53:21 -- common/autotest_common.sh@817 -- # '[' -z 118945 ']' 00:23:21.625 01:53:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.625 01:53:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:21.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.625 01:53:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.625 01:53:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:21.625 01:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:21.625 [2024-04-24 01:53:21.566825] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:23:21.625 [2024-04-24 01:53:21.566961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118945 ] 00:23:21.883 [2024-04-24 01:53:21.721638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.141 [2024-04-24 01:53:21.991544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.708 01:53:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:22.708 01:53:22 -- common/autotest_common.sh@850 -- # return 0 00:23:22.708 01:53:22 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:23:22.708 01:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.708 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:22.708 Malloc_0 00:23:22.708 01:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.708 01:53:22 -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:23:22.708 01:53:22 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0 00:23:22.709 01:53:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:22.709 01:53:22 -- common/autotest_common.sh@887 -- # local i 00:23:22.709 01:53:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:22.709 01:53:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:22.709 01:53:22 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:23:22.709 01:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.709 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:22.709 01:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.709 01:53:22 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:23:22.709 01:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.709 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:22.709 [ 00:23:22.709 { 00:23:22.709 "name": "Malloc_0", 00:23:22.709 "aliases": [ 00:23:22.709 "cb4f7a02-923c-45f3-9fea-89dce9c80244" 00:23:22.709 ], 00:23:22.709 "product_name": "Malloc disk", 00:23:22.709 "block_size": 512, 00:23:22.709 "num_blocks": 262144, 00:23:22.709 "uuid": "cb4f7a02-923c-45f3-9fea-89dce9c80244", 00:23:22.709 "assigned_rate_limits": { 00:23:22.709 "rw_ios_per_sec": 0, 00:23:22.709 "rw_mbytes_per_sec": 0, 00:23:22.709 "r_mbytes_per_sec": 0, 00:23:22.709 "w_mbytes_per_sec": 0 00:23:22.709 }, 00:23:22.709 "claimed": false, 00:23:22.709 "zoned": false, 00:23:22.709 "supported_io_types": { 00:23:22.709 "read": true, 00:23:22.709 "write": true, 00:23:22.709 "unmap": true, 00:23:22.709 "write_zeroes": true, 00:23:22.709 "flush": true, 00:23:22.709 "reset": true, 00:23:22.709 "compare": false, 00:23:22.709 "compare_and_write": false, 00:23:22.709 "abort": true, 00:23:22.709 "nvme_admin": false, 00:23:22.709 "nvme_io": false 00:23:22.709 }, 00:23:22.709 "memory_domains": [ 00:23:22.709 { 00:23:22.709 "dma_device_id": "system", 00:23:22.709 "dma_device_type": 1 00:23:22.709 }, 00:23:22.709 { 00:23:22.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.709 "dma_device_type": 2 00:23:22.709 } 00:23:22.709 ], 00:23:22.709 "driver_specific": {} 00:23:22.709 } 00:23:22.709 ] 00:23:22.709 01:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.709 01:53:22 -- common/autotest_common.sh@893 -- # return 0 00:23:22.709 01:53:22 -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:23:22.709 01:53:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.709 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:22.709 Null_1 00:23:22.709 01:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.709 01:53:22 -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:23:22.709 01:53:22 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1 00:23:22.709 01:53:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:22.709 01:53:22 -- common/autotest_common.sh@887 -- # local i 00:23:22.709 01:53:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:22.709 01:53:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:22.709 01:53:22 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:23:22.709 01:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.709 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:22.709 01:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.709 01:53:22 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:23:22.709 01:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.709 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:22.966 [ 00:23:22.966 { 00:23:22.966 "name": "Null_1", 00:23:22.966 "aliases": [ 00:23:22.966 "fe500a33-078f-420e-81f3-fae3ab65ce07" 00:23:22.966 ], 00:23:22.966 "product_name": "Null disk", 00:23:22.966 "block_size": 512, 00:23:22.966 "num_blocks": 262144, 00:23:22.966 "uuid": "fe500a33-078f-420e-81f3-fae3ab65ce07", 00:23:22.966 "assigned_rate_limits": { 00:23:22.966 "rw_ios_per_sec": 0, 00:23:22.966 "rw_mbytes_per_sec": 0, 00:23:22.966 "r_mbytes_per_sec": 0, 00:23:22.966 "w_mbytes_per_sec": 0 00:23:22.966 }, 00:23:22.966 "claimed": false, 00:23:22.966 "zoned": false, 00:23:22.966 "supported_io_types": { 00:23:22.966 "read": true, 00:23:22.966 "write": true, 00:23:22.966 "unmap": false, 00:23:22.966 "write_zeroes": true, 00:23:22.966 "flush": false, 00:23:22.966 "reset": true, 00:23:22.966 "compare": false, 00:23:22.966 "compare_and_write": false, 00:23:22.966 "abort": true, 00:23:22.966 "nvme_admin": false, 00:23:22.966 "nvme_io": false 00:23:22.966 }, 00:23:22.966 "driver_specific": {} 00:23:22.966 } 00:23:22.966 ] 00:23:22.966 01:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.966 01:53:22 -- common/autotest_common.sh@893 -- # return 0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@457 -- # qos_function_test 00:23:22.966 01:53:22 -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:23:22.966 01:53:22 -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:23:22.966 01:53:22 -- bdev/blockdev.sh@412 -- # local io_result=0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:22.966 01:53:22 -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:23:22.966 01:53:22 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:22.966 01:53:22 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:22.966 01:53:22 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:23:22.966 01:53:22 -- bdev/blockdev.sh@378 -- # tail -1 00:23:22.966 Running I/O for 60 seconds... 
00:23:28.230 01:53:27 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 76840.12 307360.48 0.00 0.00 311296.00 0.00 0.00 ' 00:23:28.230 01:53:27 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:23:28.230 01:53:27 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:23:28.230 01:53:27 -- bdev/blockdev.sh@380 -- # iostat_result=76840.12 00:23:28.230 01:53:27 -- bdev/blockdev.sh@385 -- # echo 76840 00:23:28.230 01:53:28 -- bdev/blockdev.sh@416 -- # io_result=76840 00:23:28.230 01:53:28 -- bdev/blockdev.sh@418 -- # iops_limit=19000 00:23:28.230 01:53:28 -- bdev/blockdev.sh@419 -- # '[' 19000 -gt 1000 ']' 00:23:28.230 01:53:28 -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0 00:23:28.230 01:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.230 01:53:28 -- common/autotest_common.sh@10 -- # set +x 00:23:28.230 01:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.230 01:53:28 -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 19000 IOPS Malloc_0 00:23:28.230 01:53:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:28.230 01:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:28.230 01:53:28 -- common/autotest_common.sh@10 -- # set +x 00:23:28.230 ************************************ 00:23:28.230 START TEST bdev_qos_iops 00:23:28.230 ************************************ 00:23:28.230 01:53:28 -- common/autotest_common.sh@1111 -- # run_qos_test 19000 IOPS Malloc_0 00:23:28.230 01:53:28 -- bdev/blockdev.sh@389 -- # local qos_limit=19000 00:23:28.230 01:53:28 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:23:28.230 01:53:28 -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:23:28.230 01:53:28 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:23:28.230 01:53:28 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:23:28.230 01:53:28 -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:28.230 01:53:28 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:23:28.230 01:53:28 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:28.230 01:53:28 -- bdev/blockdev.sh@378 -- # tail -1 00:23:33.496 01:53:33 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 18999.17 75996.69 0.00 0.00 77064.00 0.00 0.00 ' 00:23:33.496 01:53:33 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:23:33.496 01:53:33 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:23:33.496 01:53:33 -- bdev/blockdev.sh@380 -- # iostat_result=18999.17 00:23:33.496 01:53:33 -- bdev/blockdev.sh@385 -- # echo 18999 00:23:33.496 01:53:33 -- bdev/blockdev.sh@392 -- # qos_result=18999 00:23:33.496 01:53:33 -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:23:33.496 01:53:33 -- bdev/blockdev.sh@396 -- # lower_limit=17100 00:23:33.496 01:53:33 -- bdev/blockdev.sh@397 -- # upper_limit=20900 00:23:33.496 01:53:33 -- bdev/blockdev.sh@400 -- # '[' 18999 -lt 17100 ']' 00:23:33.496 01:53:33 -- bdev/blockdev.sh@400 -- # '[' 18999 -gt 20900 ']' 00:23:33.496 00:23:33.496 real 0m5.205s 00:23:33.496 user 0m0.118s 00:23:33.496 sys 0m0.035s 00:23:33.496 01:53:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.496 01:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:33.496 ************************************ 00:23:33.496 END TEST bdev_qos_iops 00:23:33.496 ************************************ 00:23:33.496 01:53:33 -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:23:33.496 01:53:33 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:23:33.496 01:53:33 -- 
bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:23:33.496 01:53:33 -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:33.496 01:53:33 -- bdev/blockdev.sh@378 -- # grep Null_1 00:23:33.496 01:53:33 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:33.496 01:53:33 -- bdev/blockdev.sh@378 -- # tail -1 00:23:38.762 01:53:38 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 32372.92 129491.67 0.00 0.00 131072.00 0.00 0.00 ' 00:23:38.762 01:53:38 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:23:38.762 01:53:38 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:38.762 01:53:38 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:23:38.762 01:53:38 -- bdev/blockdev.sh@382 -- # iostat_result=131072.00 00:23:38.762 01:53:38 -- bdev/blockdev.sh@385 -- # echo 131072 00:23:38.762 01:53:38 -- bdev/blockdev.sh@427 -- # bw_limit=131072 00:23:38.762 01:53:38 -- bdev/blockdev.sh@428 -- # bw_limit=12 00:23:38.762 01:53:38 -- bdev/blockdev.sh@429 -- # '[' 12 -lt 2 ']' 00:23:38.762 01:53:38 -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:23:38.762 01:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.762 01:53:38 -- common/autotest_common.sh@10 -- # set +x 00:23:38.762 01:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.762 01:53:38 -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:23:38.762 01:53:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:38.762 01:53:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.762 01:53:38 -- common/autotest_common.sh@10 -- # set +x 00:23:38.762 ************************************ 00:23:38.762 START TEST bdev_qos_bw 00:23:38.762 ************************************ 00:23:38.762 01:53:38 -- common/autotest_common.sh@1111 -- # run_qos_test 12 BANDWIDTH Null_1 00:23:38.762 01:53:38 -- bdev/blockdev.sh@389 -- # local qos_limit=12 00:23:38.762 01:53:38 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:23:38.762 01:53:38 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:23:38.762 01:53:38 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:23:38.762 01:53:38 -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:23:38.762 01:53:38 -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:38.762 01:53:38 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:38.762 01:53:38 -- bdev/blockdev.sh@378 -- # grep Null_1 00:23:38.762 01:53:38 -- bdev/blockdev.sh@378 -- # tail -1 00:23:44.074 01:53:43 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3072.94 12291.74 0.00 0.00 12608.00 0.00 0.00 ' 00:23:44.074 01:53:43 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:23:44.074 01:53:43 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:44.074 01:53:43 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:23:44.074 01:53:43 -- bdev/blockdev.sh@382 -- # iostat_result=12608.00 00:23:44.074 01:53:43 -- bdev/blockdev.sh@385 -- # echo 12608 00:23:44.074 01:53:43 -- bdev/blockdev.sh@392 -- # qos_result=12608 00:23:44.074 01:53:43 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:44.074 01:53:43 -- bdev/blockdev.sh@394 -- # qos_limit=12288 00:23:44.074 01:53:43 -- bdev/blockdev.sh@396 -- # lower_limit=11059 00:23:44.074 01:53:43 -- bdev/blockdev.sh@397 -- # upper_limit=13516 00:23:44.074 01:53:43 -- bdev/blockdev.sh@400 -- # '[' 12608 -lt 11059 ']' 00:23:44.074 01:53:43 -- bdev/blockdev.sh@400 -- # '[' 
12608 -gt 13516 ']' 00:23:44.074 00:23:44.074 real 0m5.245s 00:23:44.074 user 0m0.119s 00:23:44.074 sys 0m0.031s 00:23:44.074 01:53:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:44.074 ************************************ 00:23:44.074 END TEST bdev_qos_bw 00:23:44.074 ************************************ 00:23:44.074 01:53:43 -- common/autotest_common.sh@10 -- # set +x 00:23:44.074 01:53:43 -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:23:44.074 01:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.074 01:53:43 -- common/autotest_common.sh@10 -- # set +x 00:23:44.074 01:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.074 01:53:43 -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:23:44.074 01:53:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:44.074 01:53:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:44.074 01:53:43 -- common/autotest_common.sh@10 -- # set +x 00:23:44.074 ************************************ 00:23:44.074 START TEST bdev_qos_ro_bw 00:23:44.074 ************************************ 00:23:44.074 01:53:44 -- common/autotest_common.sh@1111 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:23:44.074 01:53:44 -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:23:44.074 01:53:44 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:23:44.074 01:53:44 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:23:44.074 01:53:44 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:23:44.074 01:53:44 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:23:44.074 01:53:44 -- bdev/blockdev.sh@377 -- # local iostat_result 00:23:44.074 01:53:44 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:23:44.074 01:53:44 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:23:44.074 01:53:44 -- bdev/blockdev.sh@378 -- # tail -1 00:23:49.343 01:53:49 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.89 2047.56 0.00 0.00 2060.00 0.00 0.00 ' 00:23:49.343 01:53:49 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:23:49.343 01:53:49 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:49.343 01:53:49 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:23:49.343 01:53:49 -- bdev/blockdev.sh@382 -- # iostat_result=2060.00 00:23:49.343 01:53:49 -- bdev/blockdev.sh@385 -- # echo 2060 00:23:49.343 01:53:49 -- bdev/blockdev.sh@392 -- # qos_result=2060 00:23:49.343 01:53:49 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:23:49.343 01:53:49 -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:23:49.343 01:53:49 -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:23:49.343 01:53:49 -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:23:49.343 01:53:49 -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']' 00:23:49.343 01:53:49 -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']' 00:23:49.343 00:23:49.343 real 0m5.212s 00:23:49.343 user 0m0.157s 00:23:49.343 sys 0m0.028s 00:23:49.343 01:53:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:49.343 ************************************ 00:23:49.343 END TEST bdev_qos_ro_bw 00:23:49.343 01:53:49 -- common/autotest_common.sh@10 -- # set +x 00:23:49.343 ************************************ 00:23:49.343 01:53:49 -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:23:49.343 01:53:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.343 01:53:49 -- common/autotest_common.sh@10 -- # set +x 00:23:49.910 01:53:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.910 01:53:49 -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:23:49.910 01:53:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.910 01:53:49 -- common/autotest_common.sh@10 -- # set +x 00:23:50.169 00:23:50.169 Latency(us) 00:23:50.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.169 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:50.169 Malloc_0 : 26.90 26783.37 104.62 0.00 0.00 9468.10 2012.89 503316.48 00:23:50.169 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:50.169 Null_1 : 27.13 29378.72 114.76 0.00 0.00 8696.31 631.95 214708.42 00:23:50.169 =================================================================================================================== 00:23:50.169 Total : 56162.09 219.38 0.00 0.00 9062.75 631.95 503316.48 00:23:50.169 0 00:23:50.169 01:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.169 01:53:50 -- bdev/blockdev.sh@461 -- # killprocess 118945 00:23:50.169 01:53:50 -- common/autotest_common.sh@936 -- # '[' -z 118945 ']' 00:23:50.169 01:53:50 -- common/autotest_common.sh@940 -- # kill -0 118945 00:23:50.169 01:53:50 -- common/autotest_common.sh@941 -- # uname 00:23:50.169 01:53:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:50.169 01:53:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118945 00:23:50.169 01:53:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:50.169 killing process with pid 118945 00:23:50.169 01:53:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:50.169 01:53:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118945' 00:23:50.169 Received shutdown signal, test time was about 27.165647 seconds 00:23:50.169 00:23:50.169 Latency(us) 00:23:50.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.169 =================================================================================================================== 00:23:50.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.169 01:53:50 -- common/autotest_common.sh@955 -- # kill 118945 00:23:50.169 01:53:50 -- common/autotest_common.sh@960 -- # wait 118945 00:23:52.076 01:53:51 -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:23:52.076 00:23:52.076 real 0m30.177s 00:23:52.076 user 0m30.919s 00:23:52.076 sys 0m0.922s 00:23:52.076 01:53:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.076 01:53:51 -- common/autotest_common.sh@10 -- # set +x 00:23:52.076 ************************************ 00:23:52.076 END TEST bdev_qos 00:23:52.076 ************************************ 00:23:52.076 01:53:51 -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:23:52.076 01:53:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:52.076 01:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:52.076 01:53:51 -- common/autotest_common.sh@10 -- # set +x 00:23:52.076 ************************************ 00:23:52.076 START TEST bdev_qd_sampling 00:23:52.076 ************************************ 00:23:52.076 01:53:51 -- common/autotest_common.sh@1111 -- # qd_sampling_test_suite '' 00:23:52.076 01:53:51 -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:23:52.076 01:53:51 -- bdev/blockdev.sh@541 -- # QD_PID=119452 00:23:52.076 Process bdev QD sampling period testing pid: 119452 00:23:52.076 01:53:51 -- 
bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:23:52.077 01:53:51 -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 119452' 00:23:52.077 01:53:51 -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:23:52.077 01:53:51 -- bdev/blockdev.sh@544 -- # waitforlisten 119452 00:23:52.077 01:53:51 -- common/autotest_common.sh@817 -- # '[' -z 119452 ']' 00:23:52.077 01:53:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.077 01:53:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:52.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.077 01:53:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.077 01:53:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:52.077 01:53:51 -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 [2024-04-24 01:53:51.850728] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:23:52.077 [2024-04-24 01:53:51.850880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119452 ] 00:23:52.077 [2024-04-24 01:53:52.013184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:52.346 [2024-04-24 01:53:52.225391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.346 [2024-04-24 01:53:52.225392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.913 01:53:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:52.913 01:53:52 -- common/autotest_common.sh@850 -- # return 0 00:23:52.913 01:53:52 -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:23:52.913 01:53:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.913 01:53:52 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 Malloc_QD 00:23:52.913 01:53:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 01:53:52 -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:23:52.913 01:53:52 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD 00:23:52.913 01:53:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:52.913 01:53:52 -- common/autotest_common.sh@887 -- # local i 00:23:52.913 01:53:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:52.913 01:53:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:52.913 01:53:52 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:23:52.913 01:53:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.913 01:53:52 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 01:53:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 01:53:52 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:23:52.913 01:53:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.913 01:53:52 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 [ 00:23:52.913 { 00:23:52.913 "name": "Malloc_QD", 00:23:52.913 "aliases": [ 00:23:52.913 "77078e1e-8430-4c58-b4c5-b5b04b84d9e0" 00:23:52.913 ], 00:23:52.913 "product_name": "Malloc disk", 00:23:52.913 "block_size": 512, 00:23:52.913 "num_blocks": 262144, 
00:23:52.913 "uuid": "77078e1e-8430-4c58-b4c5-b5b04b84d9e0", 00:23:52.913 "assigned_rate_limits": { 00:23:52.913 "rw_ios_per_sec": 0, 00:23:52.913 "rw_mbytes_per_sec": 0, 00:23:52.913 "r_mbytes_per_sec": 0, 00:23:52.913 "w_mbytes_per_sec": 0 00:23:52.913 }, 00:23:52.913 "claimed": false, 00:23:52.913 "zoned": false, 00:23:52.913 "supported_io_types": { 00:23:52.913 "read": true, 00:23:52.913 "write": true, 00:23:52.913 "unmap": true, 00:23:52.913 "write_zeroes": true, 00:23:52.913 "flush": true, 00:23:52.913 "reset": true, 00:23:52.913 "compare": false, 00:23:52.913 "compare_and_write": false, 00:23:52.913 "abort": true, 00:23:52.913 "nvme_admin": false, 00:23:52.913 "nvme_io": false 00:23:52.913 }, 00:23:52.913 "memory_domains": [ 00:23:52.913 { 00:23:52.913 "dma_device_id": "system", 00:23:52.913 "dma_device_type": 1 00:23:52.913 }, 00:23:52.913 { 00:23:52.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.913 "dma_device_type": 2 00:23:52.913 } 00:23:52.913 ], 00:23:52.913 "driver_specific": {} 00:23:52.913 } 00:23:52.913 ] 00:23:52.913 01:53:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 01:53:52 -- common/autotest_common.sh@893 -- # return 0 00:23:52.913 01:53:52 -- bdev/blockdev.sh@550 -- # sleep 2 00:23:52.913 01:53:52 -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:52.913 Running I/O for 5 seconds... 00:23:55.442 01:53:54 -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:23:55.442 01:53:54 -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:23:55.442 01:53:54 -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:23:55.442 01:53:54 -- bdev/blockdev.sh@521 -- # local iostats 00:23:55.442 01:53:54 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:23:55.442 01:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.442 01:53:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.442 01:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.442 01:53:54 -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:23:55.442 01:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.442 01:53:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.442 01:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.442 01:53:54 -- bdev/blockdev.sh@525 -- # iostats='{ 00:23:55.442 "tick_rate": 2100000000, 00:23:55.442 "ticks": 1769083792314, 00:23:55.442 "bdevs": [ 00:23:55.442 { 00:23:55.442 "name": "Malloc_QD", 00:23:55.442 "bytes_read": 888181248, 00:23:55.442 "num_read_ops": 216835, 00:23:55.442 "bytes_written": 0, 00:23:55.442 "num_write_ops": 0, 00:23:55.442 "bytes_unmapped": 0, 00:23:55.442 "num_unmap_ops": 0, 00:23:55.442 "bytes_copied": 0, 00:23:55.442 "num_copy_ops": 0, 00:23:55.442 "read_latency_ticks": 2086128450944, 00:23:55.442 "max_read_latency_ticks": 12294894, 00:23:55.442 "min_read_latency_ticks": 297912, 00:23:55.442 "write_latency_ticks": 0, 00:23:55.442 "max_write_latency_ticks": 0, 00:23:55.442 "min_write_latency_ticks": 0, 00:23:55.442 "unmap_latency_ticks": 0, 00:23:55.442 "max_unmap_latency_ticks": 0, 00:23:55.442 "min_unmap_latency_ticks": 0, 00:23:55.442 "copy_latency_ticks": 0, 00:23:55.442 "max_copy_latency_ticks": 0, 00:23:55.442 "min_copy_latency_ticks": 0, 00:23:55.442 "io_error": {}, 00:23:55.442 "queue_depth_polling_period": 10, 00:23:55.442 "queue_depth": 512, 00:23:55.442 "io_time": 30, 00:23:55.442 "weighted_io_time": 15360 00:23:55.442 } 00:23:55.442 ] 
00:23:55.442 }' 00:23:55.442 01:53:54 -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:23:55.442 01:53:54 -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:23:55.442 01:53:55 -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:23:55.442 01:53:55 -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:23:55.442 01:53:55 -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:23:55.442 01:53:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.442 01:53:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.442 00:23:55.442 Latency(us) 00:23:55.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.442 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:23:55.442 Malloc_QD : 2.02 54757.22 213.90 0.00 0.00 4664.15 1115.67 6054.28 00:23:55.443 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:23:55.443 Malloc_QD : 2.02 56758.56 221.71 0.00 0.00 4499.97 717.78 4930.80 00:23:55.443 =================================================================================================================== 00:23:55.443 Total : 111515.78 435.61 0.00 0.00 4580.56 717.78 6054.28 00:23:55.443 0 00:23:55.443 01:53:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.443 01:53:55 -- bdev/blockdev.sh@554 -- # killprocess 119452 00:23:55.443 01:53:55 -- common/autotest_common.sh@936 -- # '[' -z 119452 ']' 00:23:55.443 01:53:55 -- common/autotest_common.sh@940 -- # kill -0 119452 00:23:55.443 01:53:55 -- common/autotest_common.sh@941 -- # uname 00:23:55.443 01:53:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:55.443 01:53:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119452 00:23:55.443 01:53:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:55.443 01:53:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:55.443 killing process with pid 119452 00:23:55.443 01:53:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119452' 00:23:55.443 01:53:55 -- common/autotest_common.sh@955 -- # kill 119452 00:23:55.443 Received shutdown signal, test time was about 2.188577 seconds 00:23:55.443 00:23:55.443 Latency(us) 00:23:55.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.443 =================================================================================================================== 00:23:55.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.443 01:53:55 -- common/autotest_common.sh@960 -- # wait 119452 00:23:56.818 01:53:56 -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:23:56.818 00:23:56.818 real 0m5.002s 00:23:56.818 user 0m9.198s 00:23:56.818 sys 0m0.377s 00:23:56.818 01:53:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:56.818 ************************************ 00:23:56.818 END TEST bdev_qd_sampling 00:23:56.818 01:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 ************************************ 00:23:56.818 01:53:56 -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:23:56.818 01:53:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:56.818 01:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:56.818 01:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:57.076 ************************************ 00:23:57.076 START TEST bdev_error 00:23:57.076 ************************************ 00:23:57.076 01:53:56 -- 
common/autotest_common.sh@1111 -- # error_test_suite '' 00:23:57.076 01:53:56 -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:23:57.076 01:53:56 -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:23:57.076 01:53:56 -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:23:57.076 01:53:56 -- bdev/blockdev.sh@472 -- # ERR_PID=119552 00:23:57.076 01:53:56 -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 119552' 00:23:57.076 Process error testing pid: 119552 00:23:57.076 01:53:56 -- bdev/blockdev.sh@474 -- # waitforlisten 119552 00:23:57.076 01:53:56 -- common/autotest_common.sh@817 -- # '[' -z 119552 ']' 00:23:57.076 01:53:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.076 01:53:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:57.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.076 01:53:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.076 01:53:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:57.076 01:53:56 -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:23:57.076 01:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:57.076 [2024-04-24 01:53:56.982328] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:23:57.076 [2024-04-24 01:53:56.982750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119552 ] 00:23:57.076 [2024-04-24 01:53:57.156871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.335 [2024-04-24 01:53:57.368728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.903 01:53:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:57.904 01:53:57 -- common/autotest_common.sh@850 -- # return 0 00:23:57.904 01:53:57 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:23:57.904 01:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.904 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:57.904 Dev_1 00:23:57.904 01:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.904 01:53:57 -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:23:57.904 01:53:57 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:23:57.904 01:53:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:57.904 01:53:57 -- common/autotest_common.sh@887 -- # local i 00:23:57.904 01:53:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:57.904 01:53:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:57.904 01:53:57 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:23:57.904 01:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.904 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:57.904 01:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.904 01:53:57 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:23:57.904 01:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.904 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:57.904 [ 00:23:57.904 { 00:23:57.904 "name": "Dev_1", 00:23:57.904 "aliases": [ 00:23:57.904 "959534c3-e3b8-4dc3-a496-6f3b8f1e8923" 
00:23:57.904 ], 00:23:57.904 "product_name": "Malloc disk", 00:23:57.904 "block_size": 512, 00:23:57.904 "num_blocks": 262144, 00:23:57.904 "uuid": "959534c3-e3b8-4dc3-a496-6f3b8f1e8923", 00:23:57.904 "assigned_rate_limits": { 00:23:57.904 "rw_ios_per_sec": 0, 00:23:57.904 "rw_mbytes_per_sec": 0, 00:23:57.904 "r_mbytes_per_sec": 0, 00:23:57.904 "w_mbytes_per_sec": 0 00:23:57.904 }, 00:23:57.904 "claimed": false, 00:23:57.904 "zoned": false, 00:23:57.904 "supported_io_types": { 00:23:57.904 "read": true, 00:23:57.904 "write": true, 00:23:57.904 "unmap": true, 00:23:57.904 "write_zeroes": true, 00:23:57.904 "flush": true, 00:23:57.904 "reset": true, 00:23:57.904 "compare": false, 00:23:57.904 "compare_and_write": false, 00:23:57.904 "abort": true, 00:23:57.904 "nvme_admin": false, 00:23:57.904 "nvme_io": false 00:23:57.904 }, 00:23:57.904 "memory_domains": [ 00:23:57.904 { 00:23:57.904 "dma_device_id": "system", 00:23:57.904 "dma_device_type": 1 00:23:57.904 }, 00:23:57.904 { 00:23:57.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.904 "dma_device_type": 2 00:23:57.904 } 00:23:57.904 ], 00:23:57.904 "driver_specific": {} 00:23:57.904 } 00:23:57.904 ] 00:23:57.904 01:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.904 01:53:57 -- common/autotest_common.sh@893 -- # return 0 00:23:57.904 01:53:57 -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:23:57.904 01:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.904 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:57.904 true 00:23:57.904 01:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.904 01:53:57 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:23:57.904 01:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.904 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 Dev_2 00:23:58.164 01:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.164 01:53:58 -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:23:58.164 01:53:58 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:23:58.164 01:53:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:58.164 01:53:58 -- common/autotest_common.sh@887 -- # local i 00:23:58.164 01:53:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:58.164 01:53:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:58.164 01:53:58 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:23:58.164 01:53:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.164 01:53:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 01:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.164 01:53:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:23:58.164 01:53:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.164 01:53:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 [ 00:23:58.164 { 00:23:58.164 "name": "Dev_2", 00:23:58.164 "aliases": [ 00:23:58.164 "a4d3bae3-c314-4990-a882-8152bdb56d86" 00:23:58.164 ], 00:23:58.164 "product_name": "Malloc disk", 00:23:58.164 "block_size": 512, 00:23:58.164 "num_blocks": 262144, 00:23:58.164 "uuid": "a4d3bae3-c314-4990-a882-8152bdb56d86", 00:23:58.164 "assigned_rate_limits": { 00:23:58.164 "rw_ios_per_sec": 0, 00:23:58.164 "rw_mbytes_per_sec": 0, 00:23:58.164 "r_mbytes_per_sec": 0, 00:23:58.164 "w_mbytes_per_sec": 0 00:23:58.164 }, 00:23:58.164 "claimed": false, 00:23:58.164 "zoned": false, 00:23:58.164 
"supported_io_types": { 00:23:58.164 "read": true, 00:23:58.164 "write": true, 00:23:58.164 "unmap": true, 00:23:58.164 "write_zeroes": true, 00:23:58.164 "flush": true, 00:23:58.164 "reset": true, 00:23:58.164 "compare": false, 00:23:58.164 "compare_and_write": false, 00:23:58.164 "abort": true, 00:23:58.164 "nvme_admin": false, 00:23:58.164 "nvme_io": false 00:23:58.164 }, 00:23:58.164 "memory_domains": [ 00:23:58.164 { 00:23:58.164 "dma_device_id": "system", 00:23:58.164 "dma_device_type": 1 00:23:58.164 }, 00:23:58.164 { 00:23:58.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.164 "dma_device_type": 2 00:23:58.164 } 00:23:58.164 ], 00:23:58.164 "driver_specific": {} 00:23:58.164 } 00:23:58.164 ] 00:23:58.164 01:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.164 01:53:58 -- common/autotest_common.sh@893 -- # return 0 00:23:58.164 01:53:58 -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:23:58.164 01:53:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.164 01:53:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 01:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.164 01:53:58 -- bdev/blockdev.sh@484 -- # sleep 1 00:23:58.164 01:53:58 -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:23:58.423 Running I/O for 5 seconds... 00:23:59.361 01:53:59 -- bdev/blockdev.sh@487 -- # kill -0 119552 00:23:59.361 Process is existed as continue on error is set. Pid: 119552 00:23:59.361 01:53:59 -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 119552' 00:23:59.361 01:53:59 -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:23:59.361 01:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.361 01:53:59 -- common/autotest_common.sh@10 -- # set +x 00:23:59.361 01:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.361 01:53:59 -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:23:59.361 01:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.361 01:53:59 -- common/autotest_common.sh@10 -- # set +x 00:23:59.361 Timeout while waiting for response: 00:23:59.361 00:23:59.361 00:23:59.620 01:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.620 01:53:59 -- bdev/blockdev.sh@497 -- # sleep 5 00:24:03.827 00:24:03.827 Latency(us) 00:24:03.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.827 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:24:03.827 EE_Dev_1 : 0.90 48649.97 190.04 5.56 0.00 326.45 128.73 643.66 00:24:03.828 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:24:03.828 Dev_2 : 5.00 96999.48 378.90 0.00 0.00 162.54 55.83 385476.51 00:24:03.828 =================================================================================================================== 00:24:03.828 Total : 145649.45 568.94 5.56 0.00 176.10 55.83 385476.51 00:24:04.766 01:54:04 -- bdev/blockdev.sh@499 -- # killprocess 119552 00:24:04.766 01:54:04 -- common/autotest_common.sh@936 -- # '[' -z 119552 ']' 00:24:04.766 01:54:04 -- common/autotest_common.sh@940 -- # kill -0 119552 00:24:04.766 01:54:04 -- common/autotest_common.sh@941 -- # uname 00:24:04.766 01:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.766 01:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119552 00:24:04.766 01:54:04 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:04.766 01:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:04.766 01:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119552' 00:24:04.766 killing process with pid 119552 00:24:04.766 Received shutdown signal, test time was about 5.000000 seconds 00:24:04.766 00:24:04.766 Latency(us) 00:24:04.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.766 =================================================================================================================== 00:24:04.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.766 01:54:04 -- common/autotest_common.sh@955 -- # kill 119552 00:24:04.766 01:54:04 -- common/autotest_common.sh@960 -- # wait 119552 00:24:06.672 01:54:06 -- bdev/blockdev.sh@503 -- # ERR_PID=119678 00:24:06.672 01:54:06 -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 119678' 00:24:06.672 Process error testing pid: 119678 00:24:06.672 01:54:06 -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:24:06.672 01:54:06 -- bdev/blockdev.sh@505 -- # waitforlisten 119678 00:24:06.672 01:54:06 -- common/autotest_common.sh@817 -- # '[' -z 119678 ']' 00:24:06.672 01:54:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.673 01:54:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:06.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.673 01:54:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.673 01:54:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:06.673 01:54:06 -- common/autotest_common.sh@10 -- # set +x 00:24:06.673 [2024-04-24 01:54:06.400813] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:24:06.673 [2024-04-24 01:54:06.400943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119678 ] 00:24:06.673 [2024-04-24 01:54:06.560697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.932 [2024-04-24 01:54:06.774569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.499 01:54:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:07.499 01:54:07 -- common/autotest_common.sh@850 -- # return 0 00:24:07.499 01:54:07 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:24:07.499 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.499 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.499 Dev_1 00:24:07.499 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.499 01:54:07 -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:24:07.499 01:54:07 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:24:07.499 01:54:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:07.499 01:54:07 -- common/autotest_common.sh@887 -- # local i 00:24:07.499 01:54:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:07.499 01:54:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:07.499 01:54:07 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:24:07.499 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.499 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.499 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.499 01:54:07 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:24:07.499 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.499 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.499 [ 00:24:07.499 { 00:24:07.499 "name": "Dev_1", 00:24:07.499 "aliases": [ 00:24:07.499 "4e5b6ffb-0212-4054-837d-7c7cad641fa4" 00:24:07.499 ], 00:24:07.499 "product_name": "Malloc disk", 00:24:07.499 "block_size": 512, 00:24:07.499 "num_blocks": 262144, 00:24:07.499 "uuid": "4e5b6ffb-0212-4054-837d-7c7cad641fa4", 00:24:07.499 "assigned_rate_limits": { 00:24:07.499 "rw_ios_per_sec": 0, 00:24:07.499 "rw_mbytes_per_sec": 0, 00:24:07.499 "r_mbytes_per_sec": 0, 00:24:07.499 "w_mbytes_per_sec": 0 00:24:07.499 }, 00:24:07.499 "claimed": false, 00:24:07.499 "zoned": false, 00:24:07.499 "supported_io_types": { 00:24:07.499 "read": true, 00:24:07.499 "write": true, 00:24:07.499 "unmap": true, 00:24:07.499 "write_zeroes": true, 00:24:07.499 "flush": true, 00:24:07.499 "reset": true, 00:24:07.499 "compare": false, 00:24:07.499 "compare_and_write": false, 00:24:07.499 "abort": true, 00:24:07.499 "nvme_admin": false, 00:24:07.499 "nvme_io": false 00:24:07.499 }, 00:24:07.499 "memory_domains": [ 00:24:07.499 { 00:24:07.499 "dma_device_id": "system", 00:24:07.499 "dma_device_type": 1 00:24:07.499 }, 00:24:07.499 { 00:24:07.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.499 "dma_device_type": 2 00:24:07.499 } 00:24:07.499 ], 00:24:07.499 "driver_specific": {} 00:24:07.499 } 00:24:07.499 ] 00:24:07.499 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.499 01:54:07 -- common/autotest_common.sh@893 -- # return 0 00:24:07.499 01:54:07 -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:24:07.499 01:54:07 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:24:07.499 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.499 true 00:24:07.499 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.499 01:54:07 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:24:07.499 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.499 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.760 Dev_2 00:24:07.760 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.760 01:54:07 -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:24:07.760 01:54:07 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:24:07.760 01:54:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:07.760 01:54:07 -- common/autotest_common.sh@887 -- # local i 00:24:07.760 01:54:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:07.760 01:54:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:07.760 01:54:07 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:24:07.760 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.760 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.760 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.760 01:54:07 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:24:07.760 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.760 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.760 [ 00:24:07.760 { 00:24:07.760 "name": "Dev_2", 00:24:07.760 "aliases": [ 00:24:07.760 "7e6c16ed-7912-414f-be9c-98b4edcdf7c4" 00:24:07.760 ], 00:24:07.760 "product_name": "Malloc disk", 00:24:07.760 "block_size": 512, 00:24:07.760 "num_blocks": 262144, 00:24:07.760 "uuid": "7e6c16ed-7912-414f-be9c-98b4edcdf7c4", 00:24:07.760 "assigned_rate_limits": { 00:24:07.760 "rw_ios_per_sec": 0, 00:24:07.760 "rw_mbytes_per_sec": 0, 00:24:07.760 "r_mbytes_per_sec": 0, 00:24:07.760 "w_mbytes_per_sec": 0 00:24:07.760 }, 00:24:07.760 "claimed": false, 00:24:07.760 "zoned": false, 00:24:07.760 "supported_io_types": { 00:24:07.760 "read": true, 00:24:07.760 "write": true, 00:24:07.760 "unmap": true, 00:24:07.760 "write_zeroes": true, 00:24:07.760 "flush": true, 00:24:07.760 "reset": true, 00:24:07.760 "compare": false, 00:24:07.760 "compare_and_write": false, 00:24:07.760 "abort": true, 00:24:07.760 "nvme_admin": false, 00:24:07.760 "nvme_io": false 00:24:07.760 }, 00:24:07.760 "memory_domains": [ 00:24:07.760 { 00:24:07.760 "dma_device_id": "system", 00:24:07.760 "dma_device_type": 1 00:24:07.760 }, 00:24:07.760 { 00:24:07.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.760 "dma_device_type": 2 00:24:07.760 } 00:24:07.760 ], 00:24:07.760 "driver_specific": {} 00:24:07.760 } 00:24:07.760 ] 00:24:07.760 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.760 01:54:07 -- common/autotest_common.sh@893 -- # return 0 00:24:07.760 01:54:07 -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:24:07.760 01:54:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.760 01:54:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.760 01:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.760 01:54:07 -- bdev/blockdev.sh@515 -- # NOT wait 119678 00:24:07.760 01:54:07 -- common/autotest_common.sh@638 -- # local es=0 00:24:07.760 01:54:07 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 119678 00:24:07.760 01:54:07 -- 
common/autotest_common.sh@626 -- # local arg=wait 00:24:07.760 01:54:07 -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:24:07.760 01:54:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.760 01:54:07 -- common/autotest_common.sh@630 -- # type -t wait 00:24:07.760 01:54:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.760 01:54:07 -- common/autotest_common.sh@641 -- # wait 119678 00:24:07.760 Running I/O for 5 seconds... 00:24:07.760 task offset: 228608 on job bdev=EE_Dev_1 fails 00:24:07.760 00:24:07.760 Latency(us) 00:24:07.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.760 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:24:07.760 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:24:07.760 EE_Dev_1 : 0.00 33690.66 131.60 7656.97 0.00 316.52 114.10 573.44 00:24:07.760 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:24:07.760 Dev_2 : 0.00 22824.54 89.16 0.00 0.00 497.75 115.57 916.72 00:24:07.760 =================================================================================================================== 00:24:07.760 Total : 56515.19 220.76 7656.97 0.00 414.82 114.10 916.72 00:24:07.760 [2024-04-24 01:54:07.789715] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:07.760 request: 00:24:07.760 { 00:24:07.760 "method": "perform_tests", 00:24:07.760 "req_id": 1 00:24:07.760 } 00:24:07.760 Got JSON-RPC error response 00:24:07.760 response: 00:24:07.760 { 00:24:07.760 "code": -32603, 00:24:07.760 "message": "bdevperf failed with error Operation not permitted" 00:24:07.760 } 00:24:10.296 01:54:09 -- common/autotest_common.sh@641 -- # es=255 00:24:10.296 01:54:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:10.296 01:54:09 -- common/autotest_common.sh@650 -- # es=127 00:24:10.296 01:54:09 -- common/autotest_common.sh@651 -- # case "$es" in 00:24:10.296 01:54:09 -- common/autotest_common.sh@658 -- # es=1 00:24:10.296 01:54:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:10.296 00:24:10.296 real 0m12.964s 00:24:10.296 user 0m13.083s 00:24:10.296 sys 0m0.770s 00:24:10.296 01:54:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:10.296 01:54:09 -- common/autotest_common.sh@10 -- # set +x 00:24:10.296 ************************************ 00:24:10.296 END TEST bdev_error 00:24:10.296 ************************************ 00:24:10.296 01:54:09 -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:24:10.296 01:54:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:10.296 01:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.296 01:54:09 -- common/autotest_common.sh@10 -- # set +x 00:24:10.296 ************************************ 00:24:10.296 START TEST bdev_stat 00:24:10.296 ************************************ 00:24:10.296 01:54:09 -- common/autotest_common.sh@1111 -- # stat_test_suite '' 00:24:10.296 01:54:09 -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:24:10.296 01:54:09 -- bdev/blockdev.sh@596 -- # STAT_PID=119753 00:24:10.296 Process Bdev IO statistics testing pid: 119753 00:24:10.296 01:54:09 -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 119753' 00:24:10.296 01:54:09 -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:24:10.296 01:54:09 -- 
bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:24:10.296 01:54:09 -- bdev/blockdev.sh@599 -- # waitforlisten 119753 00:24:10.296 01:54:09 -- common/autotest_common.sh@817 -- # '[' -z 119753 ']' 00:24:10.296 01:54:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.296 01:54:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:10.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.296 01:54:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.296 01:54:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:10.296 01:54:09 -- common/autotest_common.sh@10 -- # set +x 00:24:10.296 [2024-04-24 01:54:10.053263] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:24:10.296 [2024-04-24 01:54:10.053445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119753 ] 00:24:10.296 [2024-04-24 01:54:10.241656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.555 [2024-04-24 01:54:10.495130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.555 [2024-04-24 01:54:10.495132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.884 01:54:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:10.884 01:54:10 -- common/autotest_common.sh@850 -- # return 0 00:24:10.884 01:54:10 -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:24:10.884 01:54:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.884 01:54:10 -- common/autotest_common.sh@10 -- # set +x 00:24:11.157 Malloc_STAT 00:24:11.157 01:54:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.157 01:54:11 -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:24:11.157 01:54:11 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:24:11.157 01:54:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:11.157 01:54:11 -- common/autotest_common.sh@887 -- # local i 00:24:11.157 01:54:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:11.157 01:54:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:11.157 01:54:11 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:24:11.157 01:54:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.157 01:54:11 -- common/autotest_common.sh@10 -- # set +x 00:24:11.157 01:54:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.157 01:54:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:24:11.157 01:54:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.157 01:54:11 -- common/autotest_common.sh@10 -- # set +x 00:24:11.157 [ 00:24:11.157 { 00:24:11.157 "name": "Malloc_STAT", 00:24:11.157 "aliases": [ 00:24:11.157 "9aee792e-f6fa-4713-b1ac-6f6f1bb6cc0b" 00:24:11.157 ], 00:24:11.157 "product_name": "Malloc disk", 00:24:11.157 "block_size": 512, 00:24:11.157 "num_blocks": 262144, 00:24:11.157 "uuid": "9aee792e-f6fa-4713-b1ac-6f6f1bb6cc0b", 00:24:11.157 "assigned_rate_limits": { 00:24:11.157 "rw_ios_per_sec": 0, 00:24:11.157 "rw_mbytes_per_sec": 0, 00:24:11.157 "r_mbytes_per_sec": 0, 00:24:11.157 "w_mbytes_per_sec": 0 
00:24:11.157 }, 00:24:11.157 "claimed": false, 00:24:11.157 "zoned": false, 00:24:11.157 "supported_io_types": { 00:24:11.157 "read": true, 00:24:11.157 "write": true, 00:24:11.157 "unmap": true, 00:24:11.157 "write_zeroes": true, 00:24:11.157 "flush": true, 00:24:11.157 "reset": true, 00:24:11.157 "compare": false, 00:24:11.157 "compare_and_write": false, 00:24:11.157 "abort": true, 00:24:11.157 "nvme_admin": false, 00:24:11.157 "nvme_io": false 00:24:11.157 }, 00:24:11.157 "memory_domains": [ 00:24:11.157 { 00:24:11.157 "dma_device_id": "system", 00:24:11.157 "dma_device_type": 1 00:24:11.157 }, 00:24:11.157 { 00:24:11.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.157 "dma_device_type": 2 00:24:11.157 } 00:24:11.157 ], 00:24:11.157 "driver_specific": {} 00:24:11.157 } 00:24:11.157 ] 00:24:11.157 01:54:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.157 01:54:11 -- common/autotest_common.sh@893 -- # return 0 00:24:11.157 01:54:11 -- bdev/blockdev.sh@605 -- # sleep 2 00:24:11.157 01:54:11 -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:11.157 Running I/O for 10 seconds... 00:24:13.057 01:54:13 -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:24:13.057 01:54:13 -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:24:13.057 01:54:13 -- bdev/blockdev.sh@560 -- # local iostats 00:24:13.057 01:54:13 -- bdev/blockdev.sh@561 -- # local io_count1 00:24:13.057 01:54:13 -- bdev/blockdev.sh@562 -- # local io_count2 00:24:13.057 01:54:13 -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:24:13.057 01:54:13 -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:24:13.057 01:54:13 -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:24:13.057 01:54:13 -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:24:13.057 01:54:13 -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:24:13.057 01:54:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.057 01:54:13 -- common/autotest_common.sh@10 -- # set +x 00:24:13.057 01:54:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.057 01:54:13 -- bdev/blockdev.sh@568 -- # iostats='{ 00:24:13.057 "tick_rate": 2100000000, 00:24:13.057 "ticks": 1807257634682, 00:24:13.057 "bdevs": [ 00:24:13.057 { 00:24:13.057 "name": "Malloc_STAT", 00:24:13.057 "bytes_read": 893424128, 00:24:13.057 "num_read_ops": 218115, 00:24:13.057 "bytes_written": 0, 00:24:13.057 "num_write_ops": 0, 00:24:13.057 "bytes_unmapped": 0, 00:24:13.057 "num_unmap_ops": 0, 00:24:13.057 "bytes_copied": 0, 00:24:13.057 "num_copy_ops": 0, 00:24:13.057 "read_latency_ticks": 2054173687166, 00:24:13.057 "max_read_latency_ticks": 12318510, 00:24:13.057 "min_read_latency_ticks": 237602, 00:24:13.057 "write_latency_ticks": 0, 00:24:13.057 "max_write_latency_ticks": 0, 00:24:13.057 "min_write_latency_ticks": 0, 00:24:13.057 "unmap_latency_ticks": 0, 00:24:13.057 "max_unmap_latency_ticks": 0, 00:24:13.057 "min_unmap_latency_ticks": 0, 00:24:13.057 "copy_latency_ticks": 0, 00:24:13.057 "max_copy_latency_ticks": 0, 00:24:13.057 "min_copy_latency_ticks": 0, 00:24:13.057 "io_error": {} 00:24:13.057 } 00:24:13.057 ] 00:24:13.057 }' 00:24:13.057 01:54:13 -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:24:13.315 01:54:13 -- bdev/blockdev.sh@569 -- # io_count1=218115 00:24:13.315 01:54:13 -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:24:13.315 01:54:13 -- common/autotest_common.sh@549 -- # xtrace_disable 
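A minimal shell sketch of the statistics flow exercised here, assuming the bdevperf app from this run is still listening on its default RPC socket /var/tmp/spdk.sock and still exposes the Malloc_STAT bdev created above; the RPC names, the -c per-channel form and the jq filters mirror the trace, while the variable names are illustrative only.

# Sketch only: query aggregate and per-channel I/O counters the way
# stat_function_test does; assumes /var/tmp/spdk.sock and Malloc_STAT exist.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# Aggregate read counter for the whole bdev.
io_count1=$("$rpc" -s "$sock" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# Per-channel counters (-c): one entry per I/O channel, i.e. per reactor thread.
iostats_per_channel=$("$rpc" -s "$sock" bdev_get_iostat -b Malloc_STAT -c)
ch1=$(echo "$iostats_per_channel" | jq -r '.channels[0].num_read_ops')
ch2=$(echo "$iostats_per_channel" | jq -r '.channels[1].num_read_ops')

# The test asserts that the per-channel sum lies between two aggregate
# snapshots taken before and after the per-channel query.
echo "io_count1=$io_count1 per_channel_sum=$((ch1 + ch2))"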
00:24:13.315 01:54:13 -- common/autotest_common.sh@10 -- # set +x 00:24:13.315 01:54:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.315 01:54:13 -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:24:13.315 "tick_rate": 2100000000, 00:24:13.315 "ticks": 1807385804084, 00:24:13.315 "name": "Malloc_STAT", 00:24:13.315 "channels": [ 00:24:13.315 { 00:24:13.315 "thread_id": 2, 00:24:13.315 "bytes_read": 460324864, 00:24:13.315 "num_read_ops": 112384, 00:24:13.315 "bytes_written": 0, 00:24:13.315 "num_write_ops": 0, 00:24:13.315 "bytes_unmapped": 0, 00:24:13.315 "num_unmap_ops": 0, 00:24:13.315 "bytes_copied": 0, 00:24:13.315 "num_copy_ops": 0, 00:24:13.315 "read_latency_ticks": 1059145483784, 00:24:13.315 "max_read_latency_ticks": 12552964, 00:24:13.315 "min_read_latency_ticks": 6988270, 00:24:13.315 "write_latency_ticks": 0, 00:24:13.315 "max_write_latency_ticks": 0, 00:24:13.315 "min_write_latency_ticks": 0, 00:24:13.315 "unmap_latency_ticks": 0, 00:24:13.315 "max_unmap_latency_ticks": 0, 00:24:13.315 "min_unmap_latency_ticks": 0, 00:24:13.315 "copy_latency_ticks": 0, 00:24:13.315 "max_copy_latency_ticks": 0, 00:24:13.315 "min_copy_latency_ticks": 0 00:24:13.315 }, 00:24:13.315 { 00:24:13.315 "thread_id": 3, 00:24:13.315 "bytes_read": 461373440, 00:24:13.315 "num_read_ops": 112640, 00:24:13.315 "bytes_written": 0, 00:24:13.315 "num_write_ops": 0, 00:24:13.315 "bytes_unmapped": 0, 00:24:13.315 "num_unmap_ops": 0, 00:24:13.315 "bytes_copied": 0, 00:24:13.315 "num_copy_ops": 0, 00:24:13.315 "read_latency_ticks": 1060019353664, 00:24:13.315 "max_read_latency_ticks": 10824520, 00:24:13.315 "min_read_latency_ticks": 6301524, 00:24:13.315 "write_latency_ticks": 0, 00:24:13.315 "max_write_latency_ticks": 0, 00:24:13.315 "min_write_latency_ticks": 0, 00:24:13.315 "unmap_latency_ticks": 0, 00:24:13.315 "max_unmap_latency_ticks": 0, 00:24:13.315 "min_unmap_latency_ticks": 0, 00:24:13.315 "copy_latency_ticks": 0, 00:24:13.315 "max_copy_latency_ticks": 0, 00:24:13.315 "min_copy_latency_ticks": 0 00:24:13.315 } 00:24:13.315 ] 00:24:13.315 }' 00:24:13.315 01:54:13 -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:24:13.315 01:54:13 -- bdev/blockdev.sh@572 -- # io_count_per_channel1=112384 00:24:13.315 01:54:13 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=112384 00:24:13.315 01:54:13 -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:24:13.315 01:54:13 -- bdev/blockdev.sh@574 -- # io_count_per_channel2=112640 00:24:13.315 01:54:13 -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=225024 00:24:13.315 01:54:13 -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:24:13.315 01:54:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.315 01:54:13 -- common/autotest_common.sh@10 -- # set +x 00:24:13.315 01:54:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.315 01:54:13 -- bdev/blockdev.sh@577 -- # iostats='{ 00:24:13.315 "tick_rate": 2100000000, 00:24:13.315 "ticks": 1807608476078, 00:24:13.315 "bdevs": [ 00:24:13.315 { 00:24:13.315 "name": "Malloc_STAT", 00:24:13.315 "bytes_read": 972067328, 00:24:13.316 "num_read_ops": 237315, 00:24:13.316 "bytes_written": 0, 00:24:13.316 "num_write_ops": 0, 00:24:13.316 "bytes_unmapped": 0, 00:24:13.316 "num_unmap_ops": 0, 00:24:13.316 "bytes_copied": 0, 00:24:13.316 "num_copy_ops": 0, 00:24:13.316 "read_latency_ticks": 2234161281504, 00:24:13.316 "max_read_latency_ticks": 13176246, 00:24:13.316 "min_read_latency_ticks": 237602, 00:24:13.316 "write_latency_ticks": 0, 
00:24:13.316 "max_write_latency_ticks": 0, 00:24:13.316 "min_write_latency_ticks": 0, 00:24:13.316 "unmap_latency_ticks": 0, 00:24:13.316 "max_unmap_latency_ticks": 0, 00:24:13.316 "min_unmap_latency_ticks": 0, 00:24:13.316 "copy_latency_ticks": 0, 00:24:13.316 "max_copy_latency_ticks": 0, 00:24:13.316 "min_copy_latency_ticks": 0, 00:24:13.316 "io_error": {} 00:24:13.316 } 00:24:13.316 ] 00:24:13.316 }' 00:24:13.316 01:54:13 -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:24:13.316 01:54:13 -- bdev/blockdev.sh@578 -- # io_count2=237315 00:24:13.316 01:54:13 -- bdev/blockdev.sh@583 -- # '[' 225024 -lt 218115 ']' 00:24:13.316 01:54:13 -- bdev/blockdev.sh@583 -- # '[' 225024 -gt 237315 ']' 00:24:13.316 01:54:13 -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:24:13.316 01:54:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.316 01:54:13 -- common/autotest_common.sh@10 -- # set +x 00:24:13.316 00:24:13.316 Latency(us) 00:24:13.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.316 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:24:13.316 Malloc_STAT : 2.14 56829.30 221.99 0.00 0.00 4494.46 1029.85 6303.94 00:24:13.316 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:24:13.316 Malloc_STAT : 2.15 57285.30 223.77 0.00 0.00 4458.89 651.46 5180.46 00:24:13.316 =================================================================================================================== 00:24:13.316 Total : 114114.60 445.76 0.00 0.00 4476.60 651.46 6303.94 00:24:13.574 0 00:24:13.574 01:54:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.574 01:54:13 -- bdev/blockdev.sh@609 -- # killprocess 119753 00:24:13.574 01:54:13 -- common/autotest_common.sh@936 -- # '[' -z 119753 ']' 00:24:13.574 01:54:13 -- common/autotest_common.sh@940 -- # kill -0 119753 00:24:13.574 01:54:13 -- common/autotest_common.sh@941 -- # uname 00:24:13.574 01:54:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:13.574 01:54:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119753 00:24:13.574 01:54:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:13.574 01:54:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:13.574 killing process with pid 119753 00:24:13.574 01:54:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119753' 00:24:13.574 Received shutdown signal, test time was about 2.309646 seconds 00:24:13.574 00:24:13.574 Latency(us) 00:24:13.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.574 =================================================================================================================== 00:24:13.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.574 01:54:13 -- common/autotest_common.sh@955 -- # kill 119753 00:24:13.574 01:54:13 -- common/autotest_common.sh@960 -- # wait 119753 00:24:15.479 01:54:15 -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:24:15.479 00:24:15.479 real 0m5.112s 00:24:15.479 user 0m9.453s 00:24:15.479 sys 0m0.442s 00:24:15.479 01:54:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:15.479 01:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.479 ************************************ 00:24:15.479 END TEST bdev_stat 00:24:15.479 ************************************ 00:24:15.479 01:54:15 -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:24:15.479 01:54:15 -- 
bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:24:15.479 01:54:15 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:24:15.479 01:54:15 -- bdev/blockdev.sh@811 -- # cleanup 00:24:15.479 01:54:15 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:15.479 01:54:15 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:15.479 01:54:15 -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:24:15.479 01:54:15 -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:24:15.479 01:54:15 -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:24:15.479 01:54:15 -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:24:15.479 ************************************ 00:24:15.479 END TEST blockdev_general 00:24:15.479 ************************************ 00:24:15.479 00:24:15.479 real 2m36.461s 00:24:15.479 user 6m7.078s 00:24:15.479 sys 0m24.706s 00:24:15.479 01:54:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:15.479 01:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.479 01:54:15 -- spdk/autotest.sh@186 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:24:15.479 01:54:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.479 01:54:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.479 01:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.479 ************************************ 00:24:15.479 START TEST bdev_raid 00:24:15.479 ************************************ 00:24:15.479 01:54:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:24:15.479 * Looking for test storage... 00:24:15.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:15.479 01:54:15 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:15.479 01:54:15 -- bdev/nbd_common.sh@6 -- # set -e 00:24:15.479 01:54:15 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@716 -- # uname -s 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:24:15.480 01:54:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:15.480 01:54:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.480 01:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.480 ************************************ 00:24:15.480 START TEST raid_function_test_raid0 00:24:15.480 ************************************ 00:24:15.480 01:54:15 -- common/autotest_common.sh@1111 -- # raid_function_test raid0 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@86 -- # raid_pid=119923 00:24:15.480 Process raid pid: 119923 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@87 -- # 
echo 'Process raid pid: 119923' 00:24:15.480 01:54:15 -- bdev/bdev_raid.sh@88 -- # waitforlisten 119923 /var/tmp/spdk-raid.sock 00:24:15.480 01:54:15 -- common/autotest_common.sh@817 -- # '[' -z 119923 ']' 00:24:15.480 01:54:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:15.480 01:54:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:15.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:15.480 01:54:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:15.480 01:54:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:15.480 01:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.480 [2024-04-24 01:54:15.515528] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:24:15.480 [2024-04-24 01:54:15.515711] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.739 [2024-04-24 01:54:15.694282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.998 [2024-04-24 01:54:15.900729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.257 [2024-04-24 01:54:16.136839] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:16.516 01:54:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:16.516 01:54:16 -- common/autotest_common.sh@850 -- # return 0 00:24:16.516 01:54:16 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:24:16.516 01:54:16 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:24:16.516 01:54:16 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:24:16.516 01:54:16 -- bdev/bdev_raid.sh@70 -- # cat 00:24:16.516 01:54:16 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:24:16.784 [2024-04-24 01:54:16.755194] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:24:16.784 [2024-04-24 01:54:16.757107] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:24:16.784 [2024-04-24 01:54:16.757172] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:16.784 [2024-04-24 01:54:16.757182] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:16.784 [2024-04-24 01:54:16.757333] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:24:16.784 [2024-04-24 01:54:16.757626] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:16.784 [2024-04-24 01:54:16.757645] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000010e00 00:24:16.784 [2024-04-24 01:54:16.757777] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.784 Base_1 00:24:16.784 Base_2 00:24:16.784 01:54:16 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:24:16.784 01:54:16 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:24:16.784 01:54:16 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:24:17.042 01:54:16 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 
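A condensed sketch of how this raid0 target is assembled and then exported over NBD, assuming the bdev_svc app from this trace is listening on /var/tmp/spdk-raid.sock and the nbd kernel module is loaded. The contents of the generated rpcs.txt are not shown in the trace, so the base bdev sizes and the strip size below are assumptions (two 32 MiB malloc bdevs are consistent with the 131072 blocks of 512 B reported for the claimed raid).

# Sketch of the raid0 assembly performed via rpcs.txt above; values marked
# "assumed" are not visible in the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_malloc_create -b Base_1 32 512    # assumed: 32 MiB, 512 B blocks
$rpc bdev_malloc_create -b Base_2 32 512    # assumed: 32 MiB, 512 B blocks
$rpc bdev_raid_create -n raid -z 64 -r 0 -b "Base_1 Base_2"   # assumed strip size 64 KiB, level raid0

# Confirm the raid bdev is online, as the test does before exporting it.
$rpc bdev_raid_get_bdevs online | jq -r '.[0]["name"]'        # -> raid

# Export it as /dev/nbd0 and exercise it; the trace below writes a random
# pattern with dd, compares it back with cmp, and punches holes with blkdiscard.
$rpc nbd_start_disk raid /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=4096 oflag=direct
$rpc nbd_stop_disk /dev/nbd0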
00:24:17.042 01:54:16 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:24:17.042 01:54:16 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@12 -- # local i 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:17.042 01:54:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:24:17.300 [2024-04-24 01:54:17.211339] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:17.301 /dev/nbd0 00:24:17.301 01:54:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:17.301 01:54:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:17.301 01:54:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:17.301 01:54:17 -- common/autotest_common.sh@855 -- # local i 00:24:17.301 01:54:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:17.301 01:54:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:17.301 01:54:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:17.301 01:54:17 -- common/autotest_common.sh@859 -- # break 00:24:17.301 01:54:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:17.301 01:54:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:17.301 01:54:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:17.301 1+0 records in 00:24:17.301 1+0 records out 00:24:17.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263501 s, 15.5 MB/s 00:24:17.301 01:54:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:17.301 01:54:17 -- common/autotest_common.sh@872 -- # size=4096 00:24:17.301 01:54:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:17.301 01:54:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:17.301 01:54:17 -- common/autotest_common.sh@875 -- # return 0 00:24:17.301 01:54:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:17.301 01:54:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:17.301 01:54:17 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:24:17.301 01:54:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:17.301 01:54:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:17.559 { 00:24:17.559 "nbd_device": "/dev/nbd0", 00:24:17.559 "bdev_name": "raid" 00:24:17.559 } 00:24:17.559 ]' 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:17.559 { 00:24:17.559 "nbd_device": "/dev/nbd0", 00:24:17.559 "bdev_name": "raid" 00:24:17.559 } 00:24:17.559 ]' 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 
00:24:17.559 01:54:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@65 -- # count=1 00:24:17.559 01:54:17 -- bdev/nbd_common.sh@66 -- # echo 1 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@98 -- # count=1 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@20 -- # local blksize 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:24:17.559 4096+0 records in 00:24:17.559 4096+0 records out 00:24:17.559 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0265765 s, 78.9 MB/s 00:24:17.559 01:54:17 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:24:17.817 4096+0 records in 00:24:17.817 4096+0 records out 00:24:17.817 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.256975 s, 8.2 MB/s 00:24:17.817 01:54:17 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:24:17.817 01:54:17 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:24:18.112 128+0 records in 00:24:18.112 128+0 records out 00:24:18.112 65536 bytes (66 kB, 64 KiB) copied, 0.00121156 s, 54.1 MB/s 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:24:18.112 2035+0 records in 00:24:18.112 2035+0 records out 00:24:18.112 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130243 s, 80.0 
MB/s 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:24:18.112 456+0 records in 00:24:18.112 456+0 records out 00:24:18.112 233472 bytes (233 kB, 228 KiB) copied, 0.00169972 s, 137 MB/s 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@53 -- # return 0 00:24:18.112 01:54:17 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:18.112 01:54:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:18.112 01:54:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:18.112 01:54:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:18.112 01:54:17 -- bdev/nbd_common.sh@51 -- # local i 00:24:18.112 01:54:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:18.112 01:54:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:18.372 [2024-04-24 01:54:18.278355] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@41 -- # break 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@45 -- # return 0 00:24:18.372 01:54:18 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:18.372 01:54:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@65 -- # true 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@65 -- # count=0 00:24:18.630 01:54:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:18.630 01:54:18 -- bdev/bdev_raid.sh@106 -- # count=0 00:24:18.630 01:54:18 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 
']' 00:24:18.630 01:54:18 -- bdev/bdev_raid.sh@111 -- # killprocess 119923 00:24:18.630 01:54:18 -- common/autotest_common.sh@936 -- # '[' -z 119923 ']' 00:24:18.630 01:54:18 -- common/autotest_common.sh@940 -- # kill -0 119923 00:24:18.630 01:54:18 -- common/autotest_common.sh@941 -- # uname 00:24:18.630 01:54:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:18.630 01:54:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119923 00:24:18.630 01:54:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:18.630 killing process with pid 119923 00:24:18.630 01:54:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:18.631 01:54:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119923' 00:24:18.631 01:54:18 -- common/autotest_common.sh@955 -- # kill 119923 00:24:18.631 [2024-04-24 01:54:18.651835] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:18.631 01:54:18 -- common/autotest_common.sh@960 -- # wait 119923 00:24:18.631 [2024-04-24 01:54:18.651944] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.631 [2024-04-24 01:54:18.651999] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.631 [2024-04-24 01:54:18.652014] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid, state offline 00:24:18.888 [2024-04-24 01:54:18.883474] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:20.264 01:54:20 -- bdev/bdev_raid.sh@113 -- # return 0 00:24:20.264 00:24:20.264 real 0m4.909s 00:24:20.264 user 0m6.058s 00:24:20.264 sys 0m1.097s 00:24:20.264 01:54:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:20.264 01:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.264 ************************************ 00:24:20.264 END TEST raid_function_test_raid0 00:24:20.264 ************************************ 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:24:20.523 01:54:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:20.523 01:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:20.523 01:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.523 ************************************ 00:24:20.523 START TEST raid_function_test_concat 00:24:20.523 ************************************ 00:24:20.523 01:54:20 -- common/autotest_common.sh@1111 -- # raid_function_test concat 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@86 -- # raid_pid=120089 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:20.523 Process raid pid: 120089 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 120089' 00:24:20.523 01:54:20 -- bdev/bdev_raid.sh@88 -- # waitforlisten 120089 /var/tmp/spdk-raid.sock 00:24:20.523 01:54:20 -- common/autotest_common.sh@817 -- # '[' -z 120089 ']' 00:24:20.523 01:54:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:20.523 01:54:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:20.523 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:24:20.523 01:54:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:20.523 01:54:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:20.523 01:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.523 [2024-04-24 01:54:20.507305] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:24:20.523 [2024-04-24 01:54:20.507447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.782 [2024-04-24 01:54:20.666259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.040 [2024-04-24 01:54:20.885045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.040 [2024-04-24 01:54:21.122079] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.606 01:54:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:21.606 01:54:21 -- common/autotest_common.sh@850 -- # return 0 00:24:21.606 01:54:21 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:24:21.606 01:54:21 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:24:21.606 01:54:21 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:24:21.606 01:54:21 -- bdev/bdev_raid.sh@70 -- # cat 00:24:21.606 01:54:21 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:24:21.865 [2024-04-24 01:54:21.815525] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:24:21.865 [2024-04-24 01:54:21.817607] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:24:21.865 [2024-04-24 01:54:21.817681] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:21.865 [2024-04-24 01:54:21.817692] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:21.865 [2024-04-24 01:54:21.817842] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:24:21.865 [2024-04-24 01:54:21.818202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:21.865 [2024-04-24 01:54:21.818223] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000010e00 00:24:21.865 [2024-04-24 01:54:21.818384] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.865 Base_1 00:24:21.865 Base_2 00:24:21.865 01:54:21 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:24:21.865 01:54:21 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:24:21.865 01:54:21 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:24:22.124 01:54:22 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:24:22.124 01:54:22 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:24:22.124 01:54:22 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@12 -- # local i 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:22.124 01:54:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:24:22.383 [2024-04-24 01:54:22.351767] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:22.383 /dev/nbd0 00:24:22.383 01:54:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:22.383 01:54:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:22.383 01:54:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:22.383 01:54:22 -- common/autotest_common.sh@855 -- # local i 00:24:22.383 01:54:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:22.383 01:54:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:22.383 01:54:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:22.383 01:54:22 -- common/autotest_common.sh@859 -- # break 00:24:22.383 01:54:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:22.383 01:54:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:22.383 01:54:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:22.383 1+0 records in 00:24:22.383 1+0 records out 00:24:22.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177025 s, 23.1 MB/s 00:24:22.383 01:54:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:22.383 01:54:22 -- common/autotest_common.sh@872 -- # size=4096 00:24:22.383 01:54:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:22.383 01:54:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:22.383 01:54:22 -- common/autotest_common.sh@875 -- # return 0 00:24:22.383 01:54:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:22.383 01:54:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:22.383 01:54:22 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:24:22.383 01:54:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:22.383 01:54:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:22.641 { 00:24:22.641 "nbd_device": "/dev/nbd0", 00:24:22.641 "bdev_name": "raid" 00:24:22.641 } 00:24:22.641 ]' 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:22.641 { 00:24:22.641 "nbd_device": "/dev/nbd0", 00:24:22.641 "bdev_name": "raid" 00:24:22.641 } 00:24:22.641 ]' 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@65 -- # count=1 00:24:22.641 01:54:22 -- bdev/nbd_common.sh@66 -- # echo 1 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@98 -- # count=1 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:24:22.641 01:54:22 -- 
bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@20 -- # local blksize 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:24:22.641 01:54:22 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:24:22.642 01:54:22 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:24:22.642 01:54:22 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:24:22.642 01:54:22 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:24:22.642 01:54:22 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:24:22.642 01:54:22 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:24:22.642 4096+0 records in 00:24:22.642 4096+0 records out 00:24:22.642 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0279176 s, 75.1 MB/s 00:24:22.642 01:54:22 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:24:23.208 4096+0 records in 00:24:23.208 4096+0 records out 00:24:23.208 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.273638 s, 7.7 MB/s 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:24:23.208 128+0 records in 00:24:23.208 128+0 records out 00:24:23.208 65536 bytes (66 kB, 64 KiB) copied, 0.000493464 s, 133 MB/s 00:24:23.208 01:54:22 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:24:23.208 2035+0 records in 00:24:23.208 2035+0 records out 00:24:23.208 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00765659 s, 136 MB/s 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=164352 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:24:23.208 456+0 records in 00:24:23.208 456+0 records out 00:24:23.208 233472 bytes (233 kB, 228 KiB) copied, 0.00307535 s, 75.9 MB/s 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:24:23.208 01:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:24:23.209 01:54:23 -- bdev/bdev_raid.sh@53 -- # return 0 00:24:23.209 01:54:23 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:23.209 01:54:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:23.209 01:54:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:23.209 01:54:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.209 01:54:23 -- bdev/nbd_common.sh@51 -- # local i 00:24:23.209 01:54:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.209 01:54:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.467 [2024-04-24 01:54:23.324358] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@41 -- # break 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.467 01:54:23 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:23.467 01:54:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@65 -- # true 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@65 -- # count=0 00:24:23.725 01:54:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:23.725 01:54:23 -- bdev/bdev_raid.sh@106 -- # count=0 00:24:23.725 01:54:23 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:24:23.725 01:54:23 -- bdev/bdev_raid.sh@111 -- # killprocess 120089 00:24:23.725 01:54:23 -- common/autotest_common.sh@936 -- # '[' -z 120089 ']' 00:24:23.725 01:54:23 -- common/autotest_common.sh@940 -- # kill -0 120089 00:24:23.725 01:54:23 -- common/autotest_common.sh@941 -- # uname 00:24:23.725 01:54:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.725 01:54:23 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 120089 00:24:23.725 01:54:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:23.725 killing process with pid 120089 00:24:23.725 01:54:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:23.725 01:54:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120089' 00:24:23.725 01:54:23 -- common/autotest_common.sh@955 -- # kill 120089 00:24:23.725 01:54:23 -- common/autotest_common.sh@960 -- # wait 120089 00:24:23.725 [2024-04-24 01:54:23.704773] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:23.725 [2024-04-24 01:54:23.704901] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.725 [2024-04-24 01:54:23.704976] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:23.725 [2024-04-24 01:54:23.705001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid, state offline 00:24:23.983 [2024-04-24 01:54:23.954784] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.359 01:54:25 -- bdev/bdev_raid.sh@113 -- # return 0 00:24:25.359 00:24:25.359 real 0m4.975s 00:24:25.359 user 0m6.126s 00:24:25.359 sys 0m1.114s 00:24:25.359 01:54:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:25.359 01:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.359 ************************************ 00:24:25.359 END TEST raid_function_test_concat 00:24:25.359 ************************************ 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:24:25.616 01:54:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:25.616 01:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.616 01:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.616 ************************************ 00:24:25.616 START TEST raid0_resize_test 00:24:25.616 ************************************ 00:24:25.616 01:54:25 -- common/autotest_common.sh@1111 -- # raid0_resize_test 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@301 -- # raid_pid=120256 00:24:25.616 01:54:25 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 120256' 00:24:25.617 Process raid pid: 120256 00:24:25.617 01:54:25 -- bdev/bdev_raid.sh@303 -- # waitforlisten 120256 /var/tmp/spdk-raid.sock 00:24:25.617 01:54:25 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:25.617 01:54:25 -- common/autotest_common.sh@817 -- # '[' -z 120256 ']' 00:24:25.617 01:54:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:25.617 01:54:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:25.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:25.617 01:54:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:24:25.617 01:54:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:25.617 01:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.617 [2024-04-24 01:54:25.601963] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:24:25.617 [2024-04-24 01:54:25.602182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.875 [2024-04-24 01:54:25.780173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.133 [2024-04-24 01:54:26.070507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.397 [2024-04-24 01:54:26.346516] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.656 01:54:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:26.656 01:54:26 -- common/autotest_common.sh@850 -- # return 0 00:24:26.656 01:54:26 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:24:26.914 Base_1 00:24:26.914 01:54:26 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:24:27.171 Base_2 00:24:27.171 01:54:27 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:24:27.430 [2024-04-24 01:54:27.298847] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:24:27.430 [2024-04-24 01:54:27.301102] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:24:27.430 [2024-04-24 01:54:27.301177] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:27.430 [2024-04-24 01:54:27.301204] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:27.430 [2024-04-24 01:54:27.301369] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:24:27.430 [2024-04-24 01:54:27.301710] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:27.430 [2024-04-24 01:54:27.301730] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000010e00 00:24:27.430 [2024-04-24 01:54:27.301899] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.430 01:54:27 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:24:27.687 [2024-04-24 01:54:27.590897] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:27.687 [2024-04-24 01:54:27.590942] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:24:27.687 true 00:24:27.688 01:54:27 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:24:27.688 01:54:27 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:24:27.945 [2024-04-24 01:54:27.791071] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.945 01:54:27 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:24:27.945 01:54:27 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:24:27.945 01:54:27 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:24:27.945 
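The raid0_resize_test steps traced here, together with the Base_2 resize that follows below, reduce to the RPC sequence sketched next. It assumes the same bdev_svc instance listening on /var/tmp/spdk-raid.sock and the rpc.py path used in this run.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_null_create Base_1 32 512                             # 32 MiB null bdev, 512-byte blocks
$rpc bdev_null_create Base_2 32 512
$rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid     # raid0, 64 KiB strip size
$rpc bdev_null_resize Base_1 64                                 # grow one base bdev to 64 MiB
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'               # still 131072: the smaller leg limits the raid
$rpc bdev_null_resize Base_2 64                                 # grow the second leg
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'               # now 262144, as in the log below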
01:54:27 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:24:28.203 [2024-04-24 01:54:28.046955] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:24:28.203 [2024-04-24 01:54:28.046997] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:24:28.203 [2024-04-24 01:54:28.047045] bdev_raid.c:2249:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:24:28.203 true 00:24:28.203 01:54:28 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:24:28.203 01:54:28 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:24:28.461 [2024-04-24 01:54:28.331115] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.461 01:54:28 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:24:28.461 01:54:28 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:24:28.461 01:54:28 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:24:28.461 01:54:28 -- bdev/bdev_raid.sh@332 -- # killprocess 120256 00:24:28.461 01:54:28 -- common/autotest_common.sh@936 -- # '[' -z 120256 ']' 00:24:28.461 01:54:28 -- common/autotest_common.sh@940 -- # kill -0 120256 00:24:28.461 01:54:28 -- common/autotest_common.sh@941 -- # uname 00:24:28.461 01:54:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.461 01:54:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120256 00:24:28.461 01:54:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.461 01:54:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.461 killing process with pid 120256 00:24:28.461 01:54:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120256' 00:24:28.461 01:54:28 -- common/autotest_common.sh@955 -- # kill 120256 00:24:28.461 [2024-04-24 01:54:28.379416] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:28.461 01:54:28 -- common/autotest_common.sh@960 -- # wait 120256 00:24:28.461 [2024-04-24 01:54:28.379521] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.461 [2024-04-24 01:54:28.379575] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.461 [2024-04-24 01:54:28.379585] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Raid, state offline 00:24:28.461 [2024-04-24 01:54:28.380208] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@334 -- # return 0 00:24:29.838 00:24:29.838 real 0m4.208s 00:24:29.838 user 0m5.808s 00:24:29.838 sys 0m0.699s 00:24:29.838 01:54:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:29.838 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.838 ************************************ 00:24:29.838 END TEST raid0_resize_test 00:24:29.838 ************************************ 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:24:29.838 01:54:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:29.838 01:54:29 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:24:29.838 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.838 ************************************ 00:24:29.838 START TEST raid_state_function_test 00:24:29.838 ************************************ 00:24:29.838 01:54:29 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 2 false 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=120356 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120356' 00:24:29.838 Process raid pid: 120356 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:29.838 01:54:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120356 /var/tmp/spdk-raid.sock 00:24:29.838 01:54:29 -- common/autotest_common.sh@817 -- # '[' -z 120356 ']' 00:24:29.838 01:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:29.838 01:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:29.838 01:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:29.838 01:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:29.838 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.838 [2024-04-24 01:54:29.917752] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:24:29.838 [2024-04-24 01:54:29.917967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.096 [2024-04-24 01:54:30.101076] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.354 [2024-04-24 01:54:30.384601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.612 [2024-04-24 01:54:30.617265] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.870 01:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:30.870 01:54:30 -- common/autotest_common.sh@850 -- # return 0 00:24:30.870 01:54:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:31.129 [2024-04-24 01:54:31.036930] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:31.129 [2024-04-24 01:54:31.037007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:31.129 [2024-04-24 01:54:31.037017] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.129 [2024-04-24 01:54:31.037053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.129 01:54:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.388 01:54:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.388 "name": "Existed_Raid", 00:24:31.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.388 "strip_size_kb": 64, 00:24:31.388 "state": "configuring", 00:24:31.388 "raid_level": "raid0", 00:24:31.388 "superblock": false, 00:24:31.388 "num_base_bdevs": 2, 00:24:31.388 "num_base_bdevs_discovered": 0, 00:24:31.388 "num_base_bdevs_operational": 2, 00:24:31.388 "base_bdevs_list": [ 00:24:31.388 { 00:24:31.388 "name": "BaseBdev1", 00:24:31.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.388 "is_configured": false, 00:24:31.388 "data_offset": 0, 00:24:31.388 "data_size": 0 00:24:31.388 }, 00:24:31.388 { 00:24:31.388 "name": "BaseBdev2", 00:24:31.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.388 "is_configured": false, 00:24:31.388 "data_offset": 0, 00:24:31.388 "data_size": 0 00:24:31.388 } 00:24:31.388 ] 00:24:31.388 }' 00:24:31.388 01:54:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.388 01:54:31 -- 
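The configuring-state check traced above is the core of verify_raid_bdev_state. A minimal sketch of it, assuming the same RPC socket and that BaseBdev1/BaseBdev2 do not exist yet, looks like this:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# With no base bdevs present the raid can be created but stays in "configuring"
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[ "$state" = configuring ]    # the full helper also checks raid_level, strip_size_kb and the base bdev counts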
common/autotest_common.sh@10 -- # set +x 00:24:31.957 01:54:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.957 [2024-04-24 01:54:31.957013] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.957 [2024-04-24 01:54:31.957073] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:24:31.957 01:54:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:32.216 [2024-04-24 01:54:32.265065] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:32.216 [2024-04-24 01:54:32.265168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:32.216 [2024-04-24 01:54:32.265180] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:32.216 [2024-04-24 01:54:32.265211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:32.216 01:54:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:32.476 [2024-04-24 01:54:32.493727] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.476 BaseBdev1 00:24:32.476 01:54:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:32.476 01:54:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:32.476 01:54:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:32.476 01:54:32 -- common/autotest_common.sh@887 -- # local i 00:24:32.476 01:54:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:32.476 01:54:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:32.476 01:54:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:32.745 01:54:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:33.003 [ 00:24:33.003 { 00:24:33.003 "name": "BaseBdev1", 00:24:33.003 "aliases": [ 00:24:33.003 "31df1819-5d8b-4f9d-a2ef-dfe8a9d9d42e" 00:24:33.003 ], 00:24:33.003 "product_name": "Malloc disk", 00:24:33.003 "block_size": 512, 00:24:33.003 "num_blocks": 65536, 00:24:33.003 "uuid": "31df1819-5d8b-4f9d-a2ef-dfe8a9d9d42e", 00:24:33.003 "assigned_rate_limits": { 00:24:33.003 "rw_ios_per_sec": 0, 00:24:33.003 "rw_mbytes_per_sec": 0, 00:24:33.003 "r_mbytes_per_sec": 0, 00:24:33.003 "w_mbytes_per_sec": 0 00:24:33.003 }, 00:24:33.003 "claimed": true, 00:24:33.003 "claim_type": "exclusive_write", 00:24:33.003 "zoned": false, 00:24:33.003 "supported_io_types": { 00:24:33.003 "read": true, 00:24:33.003 "write": true, 00:24:33.003 "unmap": true, 00:24:33.003 "write_zeroes": true, 00:24:33.003 "flush": true, 00:24:33.003 "reset": true, 00:24:33.003 "compare": false, 00:24:33.003 "compare_and_write": false, 00:24:33.003 "abort": true, 00:24:33.003 "nvme_admin": false, 00:24:33.003 "nvme_io": false 00:24:33.003 }, 00:24:33.003 "memory_domains": [ 00:24:33.003 { 00:24:33.003 "dma_device_id": "system", 00:24:33.003 "dma_device_type": 1 00:24:33.003 }, 00:24:33.003 { 00:24:33.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.003 "dma_device_type": 2 00:24:33.003 
} 00:24:33.003 ], 00:24:33.003 "driver_specific": {} 00:24:33.003 } 00:24:33.003 ] 00:24:33.003 01:54:32 -- common/autotest_common.sh@893 -- # return 0 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.003 01:54:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.341 01:54:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.341 "name": "Existed_Raid", 00:24:33.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.341 "strip_size_kb": 64, 00:24:33.341 "state": "configuring", 00:24:33.341 "raid_level": "raid0", 00:24:33.341 "superblock": false, 00:24:33.341 "num_base_bdevs": 2, 00:24:33.341 "num_base_bdevs_discovered": 1, 00:24:33.341 "num_base_bdevs_operational": 2, 00:24:33.341 "base_bdevs_list": [ 00:24:33.341 { 00:24:33.341 "name": "BaseBdev1", 00:24:33.341 "uuid": "31df1819-5d8b-4f9d-a2ef-dfe8a9d9d42e", 00:24:33.341 "is_configured": true, 00:24:33.341 "data_offset": 0, 00:24:33.341 "data_size": 65536 00:24:33.341 }, 00:24:33.341 { 00:24:33.341 "name": "BaseBdev2", 00:24:33.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.341 "is_configured": false, 00:24:33.341 "data_offset": 0, 00:24:33.341 "data_size": 0 00:24:33.341 } 00:24:33.341 ] 00:24:33.341 }' 00:24:33.341 01:54:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.341 01:54:33 -- common/autotest_common.sh@10 -- # set +x 00:24:33.913 01:54:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:33.913 [2024-04-24 01:54:33.958078] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:33.913 [2024-04-24 01:54:33.958169] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:24:33.913 01:54:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:33.913 01:54:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:34.171 [2024-04-24 01:54:34.158163] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:34.171 [2024-04-24 01:54:34.160503] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:34.171 [2024-04-24 01:54:34.160581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.171 01:54:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.430 01:54:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.430 "name": "Existed_Raid", 00:24:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.430 "strip_size_kb": 64, 00:24:34.430 "state": "configuring", 00:24:34.430 "raid_level": "raid0", 00:24:34.430 "superblock": false, 00:24:34.430 "num_base_bdevs": 2, 00:24:34.430 "num_base_bdevs_discovered": 1, 00:24:34.430 "num_base_bdevs_operational": 2, 00:24:34.430 "base_bdevs_list": [ 00:24:34.430 { 00:24:34.430 "name": "BaseBdev1", 00:24:34.430 "uuid": "31df1819-5d8b-4f9d-a2ef-dfe8a9d9d42e", 00:24:34.430 "is_configured": true, 00:24:34.430 "data_offset": 0, 00:24:34.430 "data_size": 65536 00:24:34.430 }, 00:24:34.430 { 00:24:34.430 "name": "BaseBdev2", 00:24:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.430 "is_configured": false, 00:24:34.430 "data_offset": 0, 00:24:34.430 "data_size": 0 00:24:34.430 } 00:24:34.430 ] 00:24:34.430 }' 00:24:34.430 01:54:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.430 01:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:34.995 01:54:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:35.253 [2024-04-24 01:54:35.210020] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:35.253 [2024-04-24 01:54:35.210074] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:35.253 [2024-04-24 01:54:35.210082] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:35.253 [2024-04-24 01:54:35.210230] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:24:35.253 [2024-04-24 01:54:35.210517] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:35.253 [2024-04-24 01:54:35.210527] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:24:35.253 [2024-04-24 01:54:35.210774] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.253 BaseBdev2 00:24:35.253 01:54:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:35.253 01:54:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:24:35.253 01:54:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:35.253 01:54:35 -- common/autotest_common.sh@887 -- # local i 00:24:35.253 01:54:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
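The waitforbdev call being traced at this point follows a simple pattern; the sketch below is an assumed simplification of the helper from common/autotest_common.sh, using the same RPC socket as this run.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}            # default timeout in milliseconds, as seen in this trace
    $rpc bdev_wait_for_examine               # let pending examine callbacks finish first
    $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null   # -t makes the RPC wait for the bdev to appear
}
waitforbdev BaseBdev2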
00:24:35.253 01:54:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:35.253 01:54:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:35.513 01:54:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:35.513 [ 00:24:35.513 { 00:24:35.513 "name": "BaseBdev2", 00:24:35.513 "aliases": [ 00:24:35.513 "34f3226d-e62b-454c-8b28-cbeb87180401" 00:24:35.513 ], 00:24:35.513 "product_name": "Malloc disk", 00:24:35.513 "block_size": 512, 00:24:35.513 "num_blocks": 65536, 00:24:35.513 "uuid": "34f3226d-e62b-454c-8b28-cbeb87180401", 00:24:35.513 "assigned_rate_limits": { 00:24:35.513 "rw_ios_per_sec": 0, 00:24:35.513 "rw_mbytes_per_sec": 0, 00:24:35.513 "r_mbytes_per_sec": 0, 00:24:35.513 "w_mbytes_per_sec": 0 00:24:35.513 }, 00:24:35.513 "claimed": true, 00:24:35.513 "claim_type": "exclusive_write", 00:24:35.513 "zoned": false, 00:24:35.513 "supported_io_types": { 00:24:35.513 "read": true, 00:24:35.513 "write": true, 00:24:35.513 "unmap": true, 00:24:35.513 "write_zeroes": true, 00:24:35.513 "flush": true, 00:24:35.513 "reset": true, 00:24:35.513 "compare": false, 00:24:35.513 "compare_and_write": false, 00:24:35.513 "abort": true, 00:24:35.513 "nvme_admin": false, 00:24:35.513 "nvme_io": false 00:24:35.513 }, 00:24:35.513 "memory_domains": [ 00:24:35.513 { 00:24:35.513 "dma_device_id": "system", 00:24:35.513 "dma_device_type": 1 00:24:35.513 }, 00:24:35.513 { 00:24:35.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.513 "dma_device_type": 2 00:24:35.513 } 00:24:35.513 ], 00:24:35.513 "driver_specific": {} 00:24:35.513 } 00:24:35.513 ] 00:24:35.513 01:54:35 -- common/autotest_common.sh@893 -- # return 0 00:24:35.513 01:54:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:35.513 01:54:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.772 "name": "Existed_Raid", 00:24:35.772 "uuid": "47f302cc-583d-4614-86ad-65d45b406eb8", 00:24:35.772 "strip_size_kb": 64, 00:24:35.772 "state": "online", 00:24:35.772 "raid_level": "raid0", 00:24:35.772 "superblock": false, 00:24:35.772 "num_base_bdevs": 2, 00:24:35.772 "num_base_bdevs_discovered": 2, 00:24:35.772 "num_base_bdevs_operational": 2, 00:24:35.772 "base_bdevs_list": [ 00:24:35.772 { 00:24:35.772 "name": "BaseBdev1", 00:24:35.772 "uuid": 
"31df1819-5d8b-4f9d-a2ef-dfe8a9d9d42e", 00:24:35.772 "is_configured": true, 00:24:35.772 "data_offset": 0, 00:24:35.772 "data_size": 65536 00:24:35.772 }, 00:24:35.772 { 00:24:35.772 "name": "BaseBdev2", 00:24:35.772 "uuid": "34f3226d-e62b-454c-8b28-cbeb87180401", 00:24:35.772 "is_configured": true, 00:24:35.772 "data_offset": 0, 00:24:35.772 "data_size": 65536 00:24:35.772 } 00:24:35.772 ] 00:24:35.772 }' 00:24:35.772 01:54:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.772 01:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:36.339 01:54:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:36.597 [2024-04-24 01:54:36.546471] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:36.597 [2024-04-24 01:54:36.546508] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:36.597 [2024-04-24 01:54:36.546561] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.597 01:54:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:36.598 01:54:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.598 01:54:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.598 01:54:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.598 01:54:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.598 01:54:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.598 01:54:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.855 01:54:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.855 "name": "Existed_Raid", 00:24:36.855 "uuid": "47f302cc-583d-4614-86ad-65d45b406eb8", 00:24:36.855 "strip_size_kb": 64, 00:24:36.855 "state": "offline", 00:24:36.855 "raid_level": "raid0", 00:24:36.855 "superblock": false, 00:24:36.855 "num_base_bdevs": 2, 00:24:36.855 "num_base_bdevs_discovered": 1, 00:24:36.855 "num_base_bdevs_operational": 1, 00:24:36.855 "base_bdevs_list": [ 00:24:36.855 { 00:24:36.855 "name": null, 00:24:36.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.855 "is_configured": false, 00:24:36.855 "data_offset": 0, 00:24:36.855 "data_size": 65536 00:24:36.855 }, 00:24:36.855 { 00:24:36.855 "name": "BaseBdev2", 00:24:36.855 "uuid": "34f3226d-e62b-454c-8b28-cbeb87180401", 00:24:36.856 "is_configured": true, 00:24:36.856 "data_offset": 0, 00:24:36.856 "data_size": 65536 00:24:36.856 } 00:24:36.856 ] 00:24:36.856 }' 00:24:36.856 01:54:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.856 01:54:36 -- common/autotest_common.sh@10 -- # set +x 00:24:37.790 01:54:37 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:37.790 01:54:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:37.790 01:54:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.790 01:54:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:37.790 01:54:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:37.790 01:54:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:37.790 01:54:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:38.049 [2024-04-24 01:54:38.089650] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:38.049 [2024-04-24 01:54:38.089717] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:24:38.307 01:54:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:38.307 01:54:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:38.307 01:54:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:38.307 01:54:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.307 01:54:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:38.308 01:54:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:38.308 01:54:38 -- bdev/bdev_raid.sh@287 -- # killprocess 120356 00:24:38.308 01:54:38 -- common/autotest_common.sh@936 -- # '[' -z 120356 ']' 00:24:38.308 01:54:38 -- common/autotest_common.sh@940 -- # kill -0 120356 00:24:38.567 01:54:38 -- common/autotest_common.sh@941 -- # uname 00:24:38.567 01:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.567 01:54:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120356 00:24:38.567 killing process with pid 120356 00:24:38.567 01:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:38.567 01:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:38.567 01:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120356' 00:24:38.567 01:54:38 -- common/autotest_common.sh@955 -- # kill 120356 00:24:38.567 01:54:38 -- common/autotest_common.sh@960 -- # wait 120356 00:24:38.567 [2024-04-24 01:54:38.418659] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:38.567 [2024-04-24 01:54:38.418791] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:39.946 ************************************ 00:24:39.946 END TEST raid_state_function_test 00:24:39.946 ************************************ 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:39.946 00:24:39.946 real 0m9.917s 00:24:39.946 user 0m16.353s 00:24:39.946 sys 0m1.593s 00:24:39.946 01:54:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:39.946 01:54:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:24:39.946 01:54:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:39.946 01:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:39.946 01:54:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.946 ************************************ 00:24:39.946 START TEST raid_state_function_test_sb 00:24:39.946 ************************************ 00:24:39.946 01:54:39 -- common/autotest_common.sh@1111 -- # 
raid_state_function_test raid0 2 true 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=120681 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120681' 00:24:39.946 Process raid pid: 120681 00:24:39.946 01:54:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:39.947 01:54:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120681 /var/tmp/spdk-raid.sock 00:24:39.947 01:54:39 -- common/autotest_common.sh@817 -- # '[' -z 120681 ']' 00:24:39.947 01:54:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:39.947 01:54:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:39.947 01:54:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:39.947 01:54:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.947 01:54:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.947 [2024-04-24 01:54:39.935809] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
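The _sb variant that starts here differs from the previous test only in passing -s to bdev_raid_create, which places an on-disk superblock on each base bdev; in this run that shows up in the raid dump as data_offset 2048 and data_size 63488 instead of 0 and 65536. A short sketch of the check, assuming the same RPC socket and that BaseBdev1/BaseBdev2 already exist:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# superblock flag and per-base-bdev data offsets are visible in the raid dump
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .superblock, .base_bdevs_list[].data_offset'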
00:24:39.947 [2024-04-24 01:54:39.935991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.205 [2024-04-24 01:54:40.118002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.463 [2024-04-24 01:54:40.327873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.722 [2024-04-24 01:54:40.555473] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:40.982 01:54:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:40.982 01:54:40 -- common/autotest_common.sh@850 -- # return 0 00:24:40.982 01:54:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:41.241 [2024-04-24 01:54:41.067878] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:41.241 [2024-04-24 01:54:41.067959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:41.241 [2024-04-24 01:54:41.067970] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:41.241 [2024-04-24 01:54:41.067988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:41.241 "name": "Existed_Raid", 00:24:41.241 "uuid": "defb4dda-4463-404d-9f13-68572d3da203", 00:24:41.241 "strip_size_kb": 64, 00:24:41.241 "state": "configuring", 00:24:41.241 "raid_level": "raid0", 00:24:41.241 "superblock": true, 00:24:41.241 "num_base_bdevs": 2, 00:24:41.241 "num_base_bdevs_discovered": 0, 00:24:41.241 "num_base_bdevs_operational": 2, 00:24:41.241 "base_bdevs_list": [ 00:24:41.241 { 00:24:41.241 "name": "BaseBdev1", 00:24:41.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.241 "is_configured": false, 00:24:41.241 "data_offset": 0, 00:24:41.241 "data_size": 0 00:24:41.241 }, 00:24:41.241 { 00:24:41.241 "name": "BaseBdev2", 00:24:41.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.241 "is_configured": false, 00:24:41.241 "data_offset": 0, 00:24:41.241 "data_size": 0 00:24:41.241 } 00:24:41.241 ] 00:24:41.241 }' 00:24:41.241 01:54:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.241 01:54:41 -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.809 01:54:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:42.068 [2024-04-24 01:54:42.023918] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:42.068 [2024-04-24 01:54:42.023969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:24:42.068 01:54:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:42.327 [2024-04-24 01:54:42.207981] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:42.327 [2024-04-24 01:54:42.208079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:42.327 [2024-04-24 01:54:42.208090] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:42.327 [2024-04-24 01:54:42.208127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:42.327 01:54:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:42.586 [2024-04-24 01:54:42.506599] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:42.586 BaseBdev1 00:24:42.586 01:54:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:42.586 01:54:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:42.586 01:54:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:42.586 01:54:42 -- common/autotest_common.sh@887 -- # local i 00:24:42.586 01:54:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:42.586 01:54:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:42.586 01:54:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.858 01:54:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:42.858 [ 00:24:42.858 { 00:24:42.858 "name": "BaseBdev1", 00:24:42.858 "aliases": [ 00:24:42.858 "824de206-84eb-4150-a7e1-999051bb1301" 00:24:42.858 ], 00:24:42.858 "product_name": "Malloc disk", 00:24:42.858 "block_size": 512, 00:24:42.858 "num_blocks": 65536, 00:24:42.858 "uuid": "824de206-84eb-4150-a7e1-999051bb1301", 00:24:42.858 "assigned_rate_limits": { 00:24:42.858 "rw_ios_per_sec": 0, 00:24:42.858 "rw_mbytes_per_sec": 0, 00:24:42.858 "r_mbytes_per_sec": 0, 00:24:42.858 "w_mbytes_per_sec": 0 00:24:42.858 }, 00:24:42.858 "claimed": true, 00:24:42.858 "claim_type": "exclusive_write", 00:24:42.858 "zoned": false, 00:24:42.858 "supported_io_types": { 00:24:42.858 "read": true, 00:24:42.858 "write": true, 00:24:42.858 "unmap": true, 00:24:42.858 "write_zeroes": true, 00:24:42.858 "flush": true, 00:24:42.858 "reset": true, 00:24:42.858 "compare": false, 00:24:42.858 "compare_and_write": false, 00:24:42.858 "abort": true, 00:24:42.858 "nvme_admin": false, 00:24:42.858 "nvme_io": false 00:24:42.858 }, 00:24:42.858 "memory_domains": [ 00:24:42.858 { 00:24:42.858 "dma_device_id": "system", 00:24:42.858 "dma_device_type": 1 00:24:42.858 }, 00:24:42.858 { 00:24:42.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.858 "dma_device_type": 2 
00:24:42.858 } 00:24:42.858 ], 00:24:42.858 "driver_specific": {} 00:24:42.858 } 00:24:42.858 ] 00:24:42.858 01:54:42 -- common/autotest_common.sh@893 -- # return 0 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.858 01:54:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.150 01:54:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:43.150 "name": "Existed_Raid", 00:24:43.150 "uuid": "358cfd9a-d9e1-4870-90f7-ef3ebe2bdcee", 00:24:43.150 "strip_size_kb": 64, 00:24:43.150 "state": "configuring", 00:24:43.150 "raid_level": "raid0", 00:24:43.150 "superblock": true, 00:24:43.150 "num_base_bdevs": 2, 00:24:43.150 "num_base_bdevs_discovered": 1, 00:24:43.150 "num_base_bdevs_operational": 2, 00:24:43.150 "base_bdevs_list": [ 00:24:43.150 { 00:24:43.150 "name": "BaseBdev1", 00:24:43.150 "uuid": "824de206-84eb-4150-a7e1-999051bb1301", 00:24:43.150 "is_configured": true, 00:24:43.150 "data_offset": 2048, 00:24:43.150 "data_size": 63488 00:24:43.150 }, 00:24:43.150 { 00:24:43.150 "name": "BaseBdev2", 00:24:43.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.150 "is_configured": false, 00:24:43.150 "data_offset": 0, 00:24:43.150 "data_size": 0 00:24:43.150 } 00:24:43.150 ] 00:24:43.150 }' 00:24:43.150 01:54:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:43.150 01:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:43.717 01:54:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:43.976 [2024-04-24 01:54:43.990963] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:43.976 [2024-04-24 01:54:43.991025] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:24:43.976 01:54:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:43.976 01:54:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:44.542 01:54:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:44.800 BaseBdev1 00:24:44.800 01:54:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:44.800 01:54:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:44.800 01:54:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:44.800 01:54:44 -- common/autotest_common.sh@887 -- # local i 00:24:44.800 01:54:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:44.800 01:54:44 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:44.800 01:54:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.800 01:54:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:45.058 [ 00:24:45.058 { 00:24:45.058 "name": "BaseBdev1", 00:24:45.058 "aliases": [ 00:24:45.058 "049c7c48-d288-42f5-9a7d-cc07a83fcf00" 00:24:45.058 ], 00:24:45.058 "product_name": "Malloc disk", 00:24:45.058 "block_size": 512, 00:24:45.058 "num_blocks": 65536, 00:24:45.058 "uuid": "049c7c48-d288-42f5-9a7d-cc07a83fcf00", 00:24:45.059 "assigned_rate_limits": { 00:24:45.059 "rw_ios_per_sec": 0, 00:24:45.059 "rw_mbytes_per_sec": 0, 00:24:45.059 "r_mbytes_per_sec": 0, 00:24:45.059 "w_mbytes_per_sec": 0 00:24:45.059 }, 00:24:45.059 "claimed": false, 00:24:45.059 "zoned": false, 00:24:45.059 "supported_io_types": { 00:24:45.059 "read": true, 00:24:45.059 "write": true, 00:24:45.059 "unmap": true, 00:24:45.059 "write_zeroes": true, 00:24:45.059 "flush": true, 00:24:45.059 "reset": true, 00:24:45.059 "compare": false, 00:24:45.059 "compare_and_write": false, 00:24:45.059 "abort": true, 00:24:45.059 "nvme_admin": false, 00:24:45.059 "nvme_io": false 00:24:45.059 }, 00:24:45.059 "memory_domains": [ 00:24:45.059 { 00:24:45.059 "dma_device_id": "system", 00:24:45.059 "dma_device_type": 1 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.059 "dma_device_type": 2 00:24:45.059 } 00:24:45.059 ], 00:24:45.059 "driver_specific": {} 00:24:45.059 } 00:24:45.059 ] 00:24:45.059 01:54:45 -- common/autotest_common.sh@893 -- # return 0 00:24:45.059 01:54:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:45.626 [2024-04-24 01:54:45.424977] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:45.626 [2024-04-24 01:54:45.426902] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:45.626 [2024-04-24 01:54:45.426958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:45.626 01:54:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.627 01:54:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.886 
01:54:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.886 "name": "Existed_Raid", 00:24:45.886 "uuid": "2be93277-93fa-4ffc-b7bb-ebdb8056a293", 00:24:45.886 "strip_size_kb": 64, 00:24:45.886 "state": "configuring", 00:24:45.886 "raid_level": "raid0", 00:24:45.886 "superblock": true, 00:24:45.886 "num_base_bdevs": 2, 00:24:45.886 "num_base_bdevs_discovered": 1, 00:24:45.886 "num_base_bdevs_operational": 2, 00:24:45.886 "base_bdevs_list": [ 00:24:45.886 { 00:24:45.886 "name": "BaseBdev1", 00:24:45.886 "uuid": "049c7c48-d288-42f5-9a7d-cc07a83fcf00", 00:24:45.886 "is_configured": true, 00:24:45.886 "data_offset": 2048, 00:24:45.886 "data_size": 63488 00:24:45.886 }, 00:24:45.886 { 00:24:45.886 "name": "BaseBdev2", 00:24:45.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.886 "is_configured": false, 00:24:45.886 "data_offset": 0, 00:24:45.886 "data_size": 0 00:24:45.886 } 00:24:45.886 ] 00:24:45.886 }' 00:24:45.886 01:54:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.886 01:54:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.453 01:54:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:46.711 [2024-04-24 01:54:46.677873] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:46.711 [2024-04-24 01:54:46.678095] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:46.711 [2024-04-24 01:54:46.678108] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:46.711 [2024-04-24 01:54:46.678260] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:46.711 [2024-04-24 01:54:46.678577] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:46.711 [2024-04-24 01:54:46.678595] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:24:46.711 [2024-04-24 01:54:46.678740] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.711 BaseBdev2 00:24:46.711 01:54:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:46.711 01:54:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:24:46.711 01:54:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:46.711 01:54:46 -- common/autotest_common.sh@887 -- # local i 00:24:46.711 01:54:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:46.711 01:54:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:46.711 01:54:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:46.969 01:54:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:47.228 [ 00:24:47.228 { 00:24:47.228 "name": "BaseBdev2", 00:24:47.228 "aliases": [ 00:24:47.228 "5092c560-0f41-403a-a95c-f26b5e618b12" 00:24:47.228 ], 00:24:47.228 "product_name": "Malloc disk", 00:24:47.228 "block_size": 512, 00:24:47.228 "num_blocks": 65536, 00:24:47.228 "uuid": "5092c560-0f41-403a-a95c-f26b5e618b12", 00:24:47.228 "assigned_rate_limits": { 00:24:47.228 "rw_ios_per_sec": 0, 00:24:47.228 "rw_mbytes_per_sec": 0, 00:24:47.228 "r_mbytes_per_sec": 0, 00:24:47.228 "w_mbytes_per_sec": 0 00:24:47.228 }, 00:24:47.228 "claimed": true, 00:24:47.228 "claim_type": "exclusive_write", 00:24:47.228 
"zoned": false, 00:24:47.228 "supported_io_types": { 00:24:47.228 "read": true, 00:24:47.228 "write": true, 00:24:47.228 "unmap": true, 00:24:47.228 "write_zeroes": true, 00:24:47.228 "flush": true, 00:24:47.228 "reset": true, 00:24:47.228 "compare": false, 00:24:47.228 "compare_and_write": false, 00:24:47.228 "abort": true, 00:24:47.228 "nvme_admin": false, 00:24:47.228 "nvme_io": false 00:24:47.228 }, 00:24:47.228 "memory_domains": [ 00:24:47.228 { 00:24:47.228 "dma_device_id": "system", 00:24:47.228 "dma_device_type": 1 00:24:47.228 }, 00:24:47.228 { 00:24:47.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.228 "dma_device_type": 2 00:24:47.228 } 00:24:47.228 ], 00:24:47.228 "driver_specific": {} 00:24:47.228 } 00:24:47.228 ] 00:24:47.228 01:54:47 -- common/autotest_common.sh@893 -- # return 0 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.228 01:54:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.486 01:54:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.486 "name": "Existed_Raid", 00:24:47.486 "uuid": "2be93277-93fa-4ffc-b7bb-ebdb8056a293", 00:24:47.486 "strip_size_kb": 64, 00:24:47.486 "state": "online", 00:24:47.486 "raid_level": "raid0", 00:24:47.486 "superblock": true, 00:24:47.486 "num_base_bdevs": 2, 00:24:47.486 "num_base_bdevs_discovered": 2, 00:24:47.486 "num_base_bdevs_operational": 2, 00:24:47.486 "base_bdevs_list": [ 00:24:47.486 { 00:24:47.486 "name": "BaseBdev1", 00:24:47.486 "uuid": "049c7c48-d288-42f5-9a7d-cc07a83fcf00", 00:24:47.486 "is_configured": true, 00:24:47.486 "data_offset": 2048, 00:24:47.486 "data_size": 63488 00:24:47.486 }, 00:24:47.486 { 00:24:47.486 "name": "BaseBdev2", 00:24:47.486 "uuid": "5092c560-0f41-403a-a95c-f26b5e618b12", 00:24:47.486 "is_configured": true, 00:24:47.486 "data_offset": 2048, 00:24:47.486 "data_size": 63488 00:24:47.486 } 00:24:47.486 ] 00:24:47.486 }' 00:24:47.486 01:54:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.486 01:54:47 -- common/autotest_common.sh@10 -- # set +x 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:48.420 [2024-04-24 01:54:48.334411] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:48.420 [2024-04-24 01:54:48.334453] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:48.420 [2024-04-24 01:54:48.334513] bdev_raid.c: 449:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.420 01:54:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.678 01:54:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.678 "name": "Existed_Raid", 00:24:48.678 "uuid": "2be93277-93fa-4ffc-b7bb-ebdb8056a293", 00:24:48.678 "strip_size_kb": 64, 00:24:48.678 "state": "offline", 00:24:48.678 "raid_level": "raid0", 00:24:48.678 "superblock": true, 00:24:48.678 "num_base_bdevs": 2, 00:24:48.678 "num_base_bdevs_discovered": 1, 00:24:48.678 "num_base_bdevs_operational": 1, 00:24:48.678 "base_bdevs_list": [ 00:24:48.678 { 00:24:48.678 "name": null, 00:24:48.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.678 "is_configured": false, 00:24:48.678 "data_offset": 2048, 00:24:48.678 "data_size": 63488 00:24:48.678 }, 00:24:48.678 { 00:24:48.678 "name": "BaseBdev2", 00:24:48.678 "uuid": "5092c560-0f41-403a-a95c-f26b5e618b12", 00:24:48.678 "is_configured": true, 00:24:48.678 "data_offset": 2048, 00:24:48.678 "data_size": 63488 00:24:48.678 } 00:24:48.678 ] 00:24:48.678 }' 00:24:48.678 01:54:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.678 01:54:48 -- common/autotest_common.sh@10 -- # set +x 00:24:49.244 01:54:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:49.244 01:54:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:49.244 01:54:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:49.244 01:54:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.811 01:54:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:49.811 01:54:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.811 01:54:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:49.811 [2024-04-24 01:54:49.872079] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:49.811 [2024-04-24 01:54:49.872159] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:24:50.069 01:54:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:50.069 01:54:49 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:24:50.069 01:54:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.069 01:54:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:50.351 01:54:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:50.351 01:54:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:50.351 01:54:50 -- bdev/bdev_raid.sh@287 -- # killprocess 120681 00:24:50.351 01:54:50 -- common/autotest_common.sh@936 -- # '[' -z 120681 ']' 00:24:50.351 01:54:50 -- common/autotest_common.sh@940 -- # kill -0 120681 00:24:50.351 01:54:50 -- common/autotest_common.sh@941 -- # uname 00:24:50.351 01:54:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:50.351 01:54:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120681 00:24:50.351 01:54:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:50.351 01:54:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:50.351 01:54:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120681' 00:24:50.351 killing process with pid 120681 00:24:50.351 01:54:50 -- common/autotest_common.sh@955 -- # kill 120681 00:24:50.351 01:54:50 -- common/autotest_common.sh@960 -- # wait 120681 00:24:50.351 [2024-04-24 01:54:50.305527] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.351 [2024-04-24 01:54:50.305671] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:51.726 ************************************ 00:24:51.726 END TEST raid_state_function_test_sb 00:24:51.726 ************************************ 00:24:51.726 01:54:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:51.726 00:24:51.726 real 0m11.949s 00:24:51.726 user 0m20.033s 00:24:51.726 sys 0m1.665s 00:24:51.726 01:54:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.726 01:54:51 -- common/autotest_common.sh@10 -- # set +x 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:24:51.984 01:54:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:51.984 01:54:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.984 01:54:51 -- common/autotest_common.sh@10 -- # set +x 00:24:51.984 ************************************ 00:24:51.984 START TEST raid_superblock_test 00:24:51.984 ************************************ 00:24:51.984 01:54:51 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 2 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' 
raid1 ']' 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=121028 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121028 /var/tmp/spdk-raid.sock 00:24:51.984 01:54:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:51.984 01:54:51 -- common/autotest_common.sh@817 -- # '[' -z 121028 ']' 00:24:51.984 01:54:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:51.984 01:54:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:51.984 01:54:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:51.984 01:54:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.984 01:54:51 -- common/autotest_common.sh@10 -- # set +x 00:24:51.984 [2024-04-24 01:54:51.984835] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:24:51.984 [2024-04-24 01:54:51.985034] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121028 ] 00:24:52.243 [2024-04-24 01:54:52.163874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.501 [2024-04-24 01:54:52.468043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.759 [2024-04-24 01:54:52.731248] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.016 01:54:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:53.016 01:54:52 -- common/autotest_common.sh@850 -- # return 0 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:53.016 01:54:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:53.273 malloc1 00:24:53.274 01:54:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:53.531 [2024-04-24 01:54:53.520520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:53.531 [2024-04-24 01:54:53.520622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.531 [2024-04-24 01:54:53.520661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:53.531 [2024-04-24 01:54:53.520716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.531 [2024-04-24 
01:54:53.523465] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.531 [2024-04-24 01:54:53.523518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:53.531 pt1 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:53.531 01:54:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:53.789 malloc2 00:24:53.789 01:54:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:54.355 [2024-04-24 01:54:54.140267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:54.355 [2024-04-24 01:54:54.140371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.355 [2024-04-24 01:54:54.140418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:54.355 [2024-04-24 01:54:54.140472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.355 [2024-04-24 01:54:54.143020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.355 [2024-04-24 01:54:54.143081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:54.355 pt2 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:24:54.355 [2024-04-24 01:54:54.348356] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:54.355 [2024-04-24 01:54:54.350693] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:54.355 [2024-04-24 01:54:54.350922] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:54.355 [2024-04-24 01:54:54.350942] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:54.355 [2024-04-24 01:54:54.351120] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:54.355 [2024-04-24 01:54:54.351501] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:54.355 [2024-04-24 01:54:54.351520] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:24:54.355 [2024-04-24 01:54:54.351691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.355 
01:54:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.355 01:54:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.613 01:54:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.613 "name": "raid_bdev1", 00:24:54.613 "uuid": "d7f0a0a1-80b7-45c2-abd1-8e76a137cc25", 00:24:54.613 "strip_size_kb": 64, 00:24:54.613 "state": "online", 00:24:54.613 "raid_level": "raid0", 00:24:54.613 "superblock": true, 00:24:54.613 "num_base_bdevs": 2, 00:24:54.613 "num_base_bdevs_discovered": 2, 00:24:54.613 "num_base_bdevs_operational": 2, 00:24:54.613 "base_bdevs_list": [ 00:24:54.613 { 00:24:54.613 "name": "pt1", 00:24:54.613 "uuid": "bed1075c-d104-5cee-8878-749f2b8d8962", 00:24:54.613 "is_configured": true, 00:24:54.613 "data_offset": 2048, 00:24:54.613 "data_size": 63488 00:24:54.613 }, 00:24:54.613 { 00:24:54.613 "name": "pt2", 00:24:54.613 "uuid": "0c4ffd3e-f75b-5cdc-a441-12ecf06f863b", 00:24:54.613 "is_configured": true, 00:24:54.613 "data_offset": 2048, 00:24:54.613 "data_size": 63488 00:24:54.613 } 00:24:54.613 ] 00:24:54.613 }' 00:24:54.613 01:54:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.613 01:54:54 -- common/autotest_common.sh@10 -- # set +x 00:24:55.181 01:54:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:55.181 01:54:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:55.439 [2024-04-24 01:54:55.352772] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.439 01:54:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d7f0a0a1-80b7-45c2-abd1-8e76a137cc25 00:24:55.439 01:54:55 -- bdev/bdev_raid.sh@380 -- # '[' -z d7f0a0a1-80b7-45c2-abd1-8e76a137cc25 ']' 00:24:55.439 01:54:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:55.697 [2024-04-24 01:54:55.644594] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:55.697 [2024-04-24 01:54:55.644644] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:55.697 [2024-04-24 01:54:55.644720] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:55.697 [2024-04-24 01:54:55.644777] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:55.697 [2024-04-24 01:54:55.644789] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:24:55.697 01:54:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:55.697 01:54:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.955 01:54:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 
00:24:55.955 01:54:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:55.955 01:54:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:55.955 01:54:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:56.213 01:54:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:56.213 01:54:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:56.471 01:54:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:56.471 01:54:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:56.729 01:54:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:56.729 01:54:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:24:56.729 01:54:56 -- common/autotest_common.sh@638 -- # local es=0 00:24:56.729 01:54:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:24:56.729 01:54:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.729 01:54:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:56.729 01:54:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.729 01:54:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:56.729 01:54:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.729 01:54:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:56.729 01:54:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.729 01:54:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:56.729 01:54:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:24:56.987 [2024-04-24 01:54:56.840827] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:56.987 [2024-04-24 01:54:56.843059] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:56.987 [2024-04-24 01:54:56.843150] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:56.987 [2024-04-24 01:54:56.843220] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:56.987 [2024-04-24 01:54:56.843253] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:56.987 [2024-04-24 01:54:56.843264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:24:56.987 request: 00:24:56.987 { 00:24:56.987 "name": "raid_bdev1", 00:24:56.987 "raid_level": "raid0", 00:24:56.987 "base_bdevs": [ 00:24:56.987 "malloc1", 00:24:56.987 "malloc2" 00:24:56.987 ], 00:24:56.987 "superblock": false, 00:24:56.987 "strip_size_kb": 64, 00:24:56.987 "method": "bdev_raid_create", 00:24:56.987 "req_id": 1 00:24:56.987 } 00:24:56.987 Got 
JSON-RPC error response 00:24:56.987 response: 00:24:56.987 { 00:24:56.987 "code": -17, 00:24:56.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:56.987 } 00:24:56.987 01:54:56 -- common/autotest_common.sh@641 -- # es=1 00:24:56.987 01:54:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:56.987 01:54:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:56.987 01:54:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:56.987 01:54:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.987 01:54:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:57.246 01:54:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:57.246 01:54:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:57.246 01:54:57 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:57.504 [2024-04-24 01:54:57.364860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:57.504 [2024-04-24 01:54:57.364975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.504 [2024-04-24 01:54:57.365031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:57.504 [2024-04-24 01:54:57.365061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.504 [2024-04-24 01:54:57.367661] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.504 [2024-04-24 01:54:57.367723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:57.504 [2024-04-24 01:54:57.367833] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:57.504 [2024-04-24 01:54:57.367885] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:57.504 pt1 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.504 01:54:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.771 01:54:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.771 "name": "raid_bdev1", 00:24:57.771 "uuid": "d7f0a0a1-80b7-45c2-abd1-8e76a137cc25", 00:24:57.771 "strip_size_kb": 64, 00:24:57.771 "state": "configuring", 00:24:57.771 "raid_level": "raid0", 00:24:57.771 "superblock": true, 00:24:57.771 "num_base_bdevs": 2, 00:24:57.771 "num_base_bdevs_discovered": 1, 00:24:57.771 "num_base_bdevs_operational": 2, 00:24:57.771 "base_bdevs_list": [ 00:24:57.771 { 00:24:57.771 "name": 
"pt1", 00:24:57.771 "uuid": "bed1075c-d104-5cee-8878-749f2b8d8962", 00:24:57.771 "is_configured": true, 00:24:57.771 "data_offset": 2048, 00:24:57.771 "data_size": 63488 00:24:57.771 }, 00:24:57.771 { 00:24:57.771 "name": null, 00:24:57.771 "uuid": "0c4ffd3e-f75b-5cdc-a441-12ecf06f863b", 00:24:57.772 "is_configured": false, 00:24:57.772 "data_offset": 2048, 00:24:57.772 "data_size": 63488 00:24:57.772 } 00:24:57.772 ] 00:24:57.772 }' 00:24:57.772 01:54:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.772 01:54:57 -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 01:54:58 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:24:58.360 01:54:58 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:58.360 01:54:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:58.360 01:54:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:58.619 [2024-04-24 01:54:58.541165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:58.619 [2024-04-24 01:54:58.541300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.619 [2024-04-24 01:54:58.541340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:58.619 [2024-04-24 01:54:58.541370] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.619 [2024-04-24 01:54:58.541886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.619 [2024-04-24 01:54:58.541941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:58.619 [2024-04-24 01:54:58.542074] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:58.619 [2024-04-24 01:54:58.542099] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:58.619 [2024-04-24 01:54:58.542223] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:58.619 [2024-04-24 01:54:58.542241] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:58.619 [2024-04-24 01:54:58.542373] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:58.619 [2024-04-24 01:54:58.542700] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:58.619 [2024-04-24 01:54:58.542720] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:24:58.619 [2024-04-24 01:54:58.542865] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.619 pt2 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:58.619 
01:54:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.619 01:54:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.877 01:54:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.877 "name": "raid_bdev1", 00:24:58.877 "uuid": "d7f0a0a1-80b7-45c2-abd1-8e76a137cc25", 00:24:58.877 "strip_size_kb": 64, 00:24:58.877 "state": "online", 00:24:58.877 "raid_level": "raid0", 00:24:58.877 "superblock": true, 00:24:58.877 "num_base_bdevs": 2, 00:24:58.877 "num_base_bdevs_discovered": 2, 00:24:58.877 "num_base_bdevs_operational": 2, 00:24:58.877 "base_bdevs_list": [ 00:24:58.877 { 00:24:58.877 "name": "pt1", 00:24:58.877 "uuid": "bed1075c-d104-5cee-8878-749f2b8d8962", 00:24:58.877 "is_configured": true, 00:24:58.877 "data_offset": 2048, 00:24:58.877 "data_size": 63488 00:24:58.877 }, 00:24:58.877 { 00:24:58.877 "name": "pt2", 00:24:58.877 "uuid": "0c4ffd3e-f75b-5cdc-a441-12ecf06f863b", 00:24:58.877 "is_configured": true, 00:24:58.877 "data_offset": 2048, 00:24:58.877 "data_size": 63488 00:24:58.877 } 00:24:58.877 ] 00:24:58.877 }' 00:24:58.877 01:54:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.877 01:54:58 -- common/autotest_common.sh@10 -- # set +x 00:24:59.442 01:54:59 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:59.442 01:54:59 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:59.700 [2024-04-24 01:54:59.729604] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.700 01:54:59 -- bdev/bdev_raid.sh@430 -- # '[' d7f0a0a1-80b7-45c2-abd1-8e76a137cc25 '!=' d7f0a0a1-80b7-45c2-abd1-8e76a137cc25 ']' 00:24:59.700 01:54:59 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:24:59.700 01:54:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:59.700 01:54:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:24:59.700 01:54:59 -- bdev/bdev_raid.sh@511 -- # killprocess 121028 00:24:59.700 01:54:59 -- common/autotest_common.sh@936 -- # '[' -z 121028 ']' 00:24:59.700 01:54:59 -- common/autotest_common.sh@940 -- # kill -0 121028 00:24:59.700 01:54:59 -- common/autotest_common.sh@941 -- # uname 00:24:59.700 01:54:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:59.700 01:54:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121028 00:24:59.700 01:54:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:59.700 01:54:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:59.700 01:54:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121028' 00:24:59.700 killing process with pid 121028 00:24:59.700 01:54:59 -- common/autotest_common.sh@955 -- # kill 121028 00:24:59.958 01:54:59 -- common/autotest_common.sh@960 -- # wait 121028 00:24:59.958 [2024-04-24 01:54:59.785799] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.958 [2024-04-24 01:54:59.785879] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.958 [2024-04-24 01:54:59.785931] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.958 [2024-04-24 01:54:59.785948] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name 
raid_bdev1, state offline 00:24:59.958 [2024-04-24 01:55:00.016259] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.971 ************************************ 00:25:01.971 END TEST raid_superblock_test 00:25:01.971 ************************************ 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:01.971 00:25:01.971 real 0m9.635s 00:25:01.971 user 0m15.979s 00:25:01.971 sys 0m1.218s 00:25:01.971 01:55:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:01.971 01:55:01 -- common/autotest_common.sh@10 -- # set +x 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:25:01.971 01:55:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:01.971 01:55:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:01.971 01:55:01 -- common/autotest_common.sh@10 -- # set +x 00:25:01.971 ************************************ 00:25:01.971 START TEST raid_state_function_test 00:25:01.971 ************************************ 00:25:01.971 01:55:01 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 false 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=121296 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121296' 00:25:01.971 Process raid pid: 121296 00:25:01.971 01:55:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121296 /var/tmp/spdk-raid.sock 00:25:01.971 01:55:01 -- common/autotest_common.sh@817 -- # '[' -z 121296 ']' 00:25:01.971 01:55:01 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:25:01.971 01:55:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:01.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:01.971 01:55:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:01.971 01:55:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:01.971 01:55:01 -- common/autotest_common.sh@10 -- # set +x 00:25:01.971 [2024-04-24 01:55:01.730379] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:25:01.971 [2024-04-24 01:55:01.730615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.971 [2024-04-24 01:55:01.914318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.229 [2024-04-24 01:55:02.224703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.487 [2024-04-24 01:55:02.465807] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.745 01:55:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:02.745 01:55:02 -- common/autotest_common.sh@850 -- # return 0 00:25:02.745 01:55:02 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:03.002 [2024-04-24 01:55:02.840004] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.002 [2024-04-24 01:55:02.840076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.002 [2024-04-24 01:55:02.840087] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.002 [2024-04-24 01:55:02.840104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.002 01:55:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.002 01:55:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.002 "name": "Existed_Raid", 00:25:03.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.002 "strip_size_kb": 64, 00:25:03.002 "state": "configuring", 00:25:03.002 "raid_level": "concat", 00:25:03.002 "superblock": false, 00:25:03.002 "num_base_bdevs": 2, 00:25:03.002 "num_base_bdevs_discovered": 0, 00:25:03.002 
"num_base_bdevs_operational": 2, 00:25:03.002 "base_bdevs_list": [ 00:25:03.002 { 00:25:03.002 "name": "BaseBdev1", 00:25:03.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.002 "is_configured": false, 00:25:03.002 "data_offset": 0, 00:25:03.002 "data_size": 0 00:25:03.002 }, 00:25:03.002 { 00:25:03.002 "name": "BaseBdev2", 00:25:03.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.002 "is_configured": false, 00:25:03.002 "data_offset": 0, 00:25:03.002 "data_size": 0 00:25:03.002 } 00:25:03.002 ] 00:25:03.002 }' 00:25:03.002 01:55:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.002 01:55:03 -- common/autotest_common.sh@10 -- # set +x 00:25:03.710 01:55:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:03.710 [2024-04-24 01:55:03.784080] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.710 [2024-04-24 01:55:03.784141] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:25:03.969 01:55:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:03.969 [2024-04-24 01:55:03.972087] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.969 [2024-04-24 01:55:03.972186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.969 [2024-04-24 01:55:03.972197] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.969 [2024-04-24 01:55:03.972227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.969 01:55:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:04.535 [2024-04-24 01:55:04.340938] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:04.535 BaseBdev1 00:25:04.535 01:55:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:04.535 01:55:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:04.535 01:55:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:04.535 01:55:04 -- common/autotest_common.sh@887 -- # local i 00:25:04.535 01:55:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:04.535 01:55:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:04.535 01:55:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.535 01:55:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:04.793 [ 00:25:04.793 { 00:25:04.793 "name": "BaseBdev1", 00:25:04.793 "aliases": [ 00:25:04.793 "1e98d43e-2f16-4873-9b70-56c6d2607965" 00:25:04.793 ], 00:25:04.793 "product_name": "Malloc disk", 00:25:04.793 "block_size": 512, 00:25:04.793 "num_blocks": 65536, 00:25:04.793 "uuid": "1e98d43e-2f16-4873-9b70-56c6d2607965", 00:25:04.793 "assigned_rate_limits": { 00:25:04.793 "rw_ios_per_sec": 0, 00:25:04.793 "rw_mbytes_per_sec": 0, 00:25:04.793 "r_mbytes_per_sec": 0, 00:25:04.793 "w_mbytes_per_sec": 0 00:25:04.793 }, 00:25:04.793 "claimed": true, 00:25:04.793 "claim_type": "exclusive_write", 00:25:04.793 "zoned": false, 00:25:04.793 
"supported_io_types": { 00:25:04.793 "read": true, 00:25:04.793 "write": true, 00:25:04.793 "unmap": true, 00:25:04.793 "write_zeroes": true, 00:25:04.793 "flush": true, 00:25:04.793 "reset": true, 00:25:04.793 "compare": false, 00:25:04.793 "compare_and_write": false, 00:25:04.793 "abort": true, 00:25:04.793 "nvme_admin": false, 00:25:04.793 "nvme_io": false 00:25:04.793 }, 00:25:04.793 "memory_domains": [ 00:25:04.793 { 00:25:04.793 "dma_device_id": "system", 00:25:04.793 "dma_device_type": 1 00:25:04.793 }, 00:25:04.793 { 00:25:04.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.793 "dma_device_type": 2 00:25:04.793 } 00:25:04.793 ], 00:25:04.793 "driver_specific": {} 00:25:04.793 } 00:25:04.793 ] 00:25:04.793 01:55:04 -- common/autotest_common.sh@893 -- # return 0 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.793 01:55:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.051 01:55:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.051 "name": "Existed_Raid", 00:25:05.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.051 "strip_size_kb": 64, 00:25:05.051 "state": "configuring", 00:25:05.051 "raid_level": "concat", 00:25:05.051 "superblock": false, 00:25:05.051 "num_base_bdevs": 2, 00:25:05.051 "num_base_bdevs_discovered": 1, 00:25:05.051 "num_base_bdevs_operational": 2, 00:25:05.051 "base_bdevs_list": [ 00:25:05.051 { 00:25:05.051 "name": "BaseBdev1", 00:25:05.051 "uuid": "1e98d43e-2f16-4873-9b70-56c6d2607965", 00:25:05.051 "is_configured": true, 00:25:05.051 "data_offset": 0, 00:25:05.051 "data_size": 65536 00:25:05.051 }, 00:25:05.051 { 00:25:05.051 "name": "BaseBdev2", 00:25:05.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.051 "is_configured": false, 00:25:05.051 "data_offset": 0, 00:25:05.051 "data_size": 0 00:25:05.051 } 00:25:05.051 ] 00:25:05.051 }' 00:25:05.051 01:55:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.051 01:55:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.618 01:55:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:05.877 [2024-04-24 01:55:05.789280] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:05.877 [2024-04-24 01:55:05.789344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:25:05.877 01:55:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:05.877 01:55:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:06.135 [2024-04-24 01:55:06.021370] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.135 [2024-04-24 01:55:06.023700] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.135 [2024-04-24 01:55:06.023772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.135 01:55:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.395 01:55:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.395 "name": "Existed_Raid", 00:25:06.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.395 "strip_size_kb": 64, 00:25:06.395 "state": "configuring", 00:25:06.395 "raid_level": "concat", 00:25:06.395 "superblock": false, 00:25:06.395 "num_base_bdevs": 2, 00:25:06.395 "num_base_bdevs_discovered": 1, 00:25:06.395 "num_base_bdevs_operational": 2, 00:25:06.395 "base_bdevs_list": [ 00:25:06.395 { 00:25:06.395 "name": "BaseBdev1", 00:25:06.395 "uuid": "1e98d43e-2f16-4873-9b70-56c6d2607965", 00:25:06.395 "is_configured": true, 00:25:06.395 "data_offset": 0, 00:25:06.395 "data_size": 65536 00:25:06.395 }, 00:25:06.395 { 00:25:06.395 "name": "BaseBdev2", 00:25:06.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.395 "is_configured": false, 00:25:06.395 "data_offset": 0, 00:25:06.395 "data_size": 0 00:25:06.395 } 00:25:06.395 ] 00:25:06.395 }' 00:25:06.395 01:55:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.395 01:55:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.962 01:55:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:07.220 [2024-04-24 01:55:07.231554] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.220 [2024-04-24 01:55:07.231617] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:07.220 [2024-04-24 01:55:07.231626] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:07.220 [2024-04-24 01:55:07.231792] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:25:07.220 [2024-04-24 01:55:07.232114] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:07.220 [2024-04-24 
01:55:07.232142] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:25:07.220 [2024-04-24 01:55:07.232435] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.220 BaseBdev2 00:25:07.220 01:55:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:07.220 01:55:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:25:07.220 01:55:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:07.220 01:55:07 -- common/autotest_common.sh@887 -- # local i 00:25:07.220 01:55:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:07.220 01:55:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:07.220 01:55:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.479 01:55:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:07.737 [ 00:25:07.737 { 00:25:07.737 "name": "BaseBdev2", 00:25:07.737 "aliases": [ 00:25:07.737 "dc0f793a-2aa6-4bcc-a549-4cea9289f95c" 00:25:07.737 ], 00:25:07.737 "product_name": "Malloc disk", 00:25:07.737 "block_size": 512, 00:25:07.737 "num_blocks": 65536, 00:25:07.737 "uuid": "dc0f793a-2aa6-4bcc-a549-4cea9289f95c", 00:25:07.737 "assigned_rate_limits": { 00:25:07.737 "rw_ios_per_sec": 0, 00:25:07.737 "rw_mbytes_per_sec": 0, 00:25:07.738 "r_mbytes_per_sec": 0, 00:25:07.738 "w_mbytes_per_sec": 0 00:25:07.738 }, 00:25:07.738 "claimed": true, 00:25:07.738 "claim_type": "exclusive_write", 00:25:07.738 "zoned": false, 00:25:07.738 "supported_io_types": { 00:25:07.738 "read": true, 00:25:07.738 "write": true, 00:25:07.738 "unmap": true, 00:25:07.738 "write_zeroes": true, 00:25:07.738 "flush": true, 00:25:07.738 "reset": true, 00:25:07.738 "compare": false, 00:25:07.738 "compare_and_write": false, 00:25:07.738 "abort": true, 00:25:07.738 "nvme_admin": false, 00:25:07.738 "nvme_io": false 00:25:07.738 }, 00:25:07.738 "memory_domains": [ 00:25:07.738 { 00:25:07.738 "dma_device_id": "system", 00:25:07.738 "dma_device_type": 1 00:25:07.738 }, 00:25:07.738 { 00:25:07.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.738 "dma_device_type": 2 00:25:07.738 } 00:25:07.738 ], 00:25:07.738 "driver_specific": {} 00:25:07.738 } 00:25:07.738 ] 00:25:07.738 01:55:07 -- common/autotest_common.sh@893 -- # return 0 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:07.738 01:55:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.103 01:55:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:08.103 "name": "Existed_Raid", 00:25:08.103 "uuid": "fcbb68f0-24fa-452f-827d-51eee768e144", 00:25:08.103 "strip_size_kb": 64, 00:25:08.104 "state": "online", 00:25:08.104 "raid_level": "concat", 00:25:08.104 "superblock": false, 00:25:08.104 "num_base_bdevs": 2, 00:25:08.104 "num_base_bdevs_discovered": 2, 00:25:08.104 "num_base_bdevs_operational": 2, 00:25:08.104 "base_bdevs_list": [ 00:25:08.104 { 00:25:08.104 "name": "BaseBdev1", 00:25:08.104 "uuid": "1e98d43e-2f16-4873-9b70-56c6d2607965", 00:25:08.104 "is_configured": true, 00:25:08.104 "data_offset": 0, 00:25:08.104 "data_size": 65536 00:25:08.104 }, 00:25:08.104 { 00:25:08.104 "name": "BaseBdev2", 00:25:08.104 "uuid": "dc0f793a-2aa6-4bcc-a549-4cea9289f95c", 00:25:08.104 "is_configured": true, 00:25:08.104 "data_offset": 0, 00:25:08.104 "data_size": 65536 00:25:08.104 } 00:25:08.104 ] 00:25:08.104 }' 00:25:08.104 01:55:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:08.104 01:55:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.696 01:55:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:08.954 [2024-04-24 01:55:09.017583] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:08.954 [2024-04-24 01:55:09.017629] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.954 [2024-04-24 01:55:09.017685] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.213 01:55:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.471 01:55:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.471 "name": "Existed_Raid", 00:25:09.471 "uuid": "fcbb68f0-24fa-452f-827d-51eee768e144", 00:25:09.471 "strip_size_kb": 64, 00:25:09.471 "state": "offline", 00:25:09.471 "raid_level": "concat", 00:25:09.472 "superblock": false, 00:25:09.472 "num_base_bdevs": 2, 00:25:09.472 "num_base_bdevs_discovered": 1, 00:25:09.472 "num_base_bdevs_operational": 1, 00:25:09.472 
"base_bdevs_list": [ 00:25:09.472 { 00:25:09.472 "name": null, 00:25:09.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.472 "is_configured": false, 00:25:09.472 "data_offset": 0, 00:25:09.472 "data_size": 65536 00:25:09.472 }, 00:25:09.472 { 00:25:09.472 "name": "BaseBdev2", 00:25:09.472 "uuid": "dc0f793a-2aa6-4bcc-a549-4cea9289f95c", 00:25:09.472 "is_configured": true, 00:25:09.472 "data_offset": 0, 00:25:09.472 "data_size": 65536 00:25:09.472 } 00:25:09.472 ] 00:25:09.472 }' 00:25:09.472 01:55:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.472 01:55:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:10.419 01:55:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:10.677 [2024-04-24 01:55:10.656677] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:10.677 [2024-04-24 01:55:10.656770] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:25:10.935 01:55:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:10.935 01:55:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:10.935 01:55:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:10.935 01:55:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.192 01:55:11 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:11.192 01:55:11 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:11.192 01:55:11 -- bdev/bdev_raid.sh@287 -- # killprocess 121296 00:25:11.192 01:55:11 -- common/autotest_common.sh@936 -- # '[' -z 121296 ']' 00:25:11.192 01:55:11 -- common/autotest_common.sh@940 -- # kill -0 121296 00:25:11.192 01:55:11 -- common/autotest_common.sh@941 -- # uname 00:25:11.192 01:55:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.192 01:55:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121296 00:25:11.192 01:55:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:11.193 01:55:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:11.193 01:55:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121296' 00:25:11.193 killing process with pid 121296 00:25:11.193 01:55:11 -- common/autotest_common.sh@955 -- # kill 121296 00:25:11.193 01:55:11 -- common/autotest_common.sh@960 -- # wait 121296 00:25:11.193 [2024-04-24 01:55:11.091025] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:11.193 [2024-04-24 01:55:11.091141] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:12.562 ************************************ 00:25:12.562 END TEST raid_state_function_test 00:25:12.562 ************************************ 00:25:12.562 01:55:12 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:12.562 00:25:12.562 real 0m10.929s 00:25:12.562 user 0m18.172s 00:25:12.562 sys 0m1.625s 00:25:12.562 01:55:12 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:25:12.562 01:55:12 -- common/autotest_common.sh@10 -- # set +x 00:25:12.562 01:55:12 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:25:12.562 01:55:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:12.562 01:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:12.562 01:55:12 -- common/autotest_common.sh@10 -- # set +x 00:25:12.819 ************************************ 00:25:12.819 START TEST raid_state_function_test_sb 00:25:12.819 ************************************ 00:25:12.819 01:55:12 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 true 00:25:12.819 01:55:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=121628 00:25:12.820 Process raid pid: 121628 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121628' 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121628 /var/tmp/spdk-raid.sock 00:25:12.820 01:55:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:12.820 01:55:12 -- common/autotest_common.sh@817 -- # '[' -z 121628 ']' 00:25:12.820 01:55:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:12.820 01:55:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:12.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:12.820 01:55:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
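Editor's note: the trace above ends with waitforlisten blocking until the freshly launched bdev_svc answers on /var/tmp/spdk-raid.sock. A minimal sketch of that launch-and-wait pattern follows; the poll loop and the use of rpc_get_methods as the readiness probe are illustrative assumptions, not the exact body of waitforlisten in autotest_common.sh.

    # start the SPDK bdev_svc app with a dedicated RPC socket (paths and flags as seen in the trace)
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the UNIX-domain socket accepts a trivial RPC (probe method is an assumption)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done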
00:25:12.820 01:55:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:12.820 01:55:12 -- common/autotest_common.sh@10 -- # set +x 00:25:12.820 [2024-04-24 01:55:12.777979] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:25:12.820 [2024-04-24 01:55:12.778268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.077 [2024-04-24 01:55:12.972440] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.335 [2024-04-24 01:55:13.321106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.593 [2024-04-24 01:55:13.622858] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:13.850 01:55:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:13.850 01:55:13 -- common/autotest_common.sh@850 -- # return 0 00:25:13.850 01:55:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:14.108 [2024-04-24 01:55:14.053154] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:14.108 [2024-04-24 01:55:14.053427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:14.108 [2024-04-24 01:55:14.053532] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:14.108 [2024-04-24 01:55:14.053653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.108 01:55:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.366 01:55:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:14.366 "name": "Existed_Raid", 00:25:14.366 "uuid": "e197b390-1329-4c1e-a1f6-e03ca38f7f6a", 00:25:14.366 "strip_size_kb": 64, 00:25:14.366 "state": "configuring", 00:25:14.366 "raid_level": "concat", 00:25:14.366 "superblock": true, 00:25:14.366 "num_base_bdevs": 2, 00:25:14.366 "num_base_bdevs_discovered": 0, 00:25:14.366 "num_base_bdevs_operational": 2, 00:25:14.366 "base_bdevs_list": [ 00:25:14.366 { 00:25:14.366 "name": "BaseBdev1", 00:25:14.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.366 "is_configured": false, 00:25:14.366 "data_offset": 0, 00:25:14.366 "data_size": 0 00:25:14.366 }, 00:25:14.366 { 00:25:14.366 "name": "BaseBdev2", 00:25:14.366 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:14.366 "is_configured": false, 00:25:14.366 "data_offset": 0, 00:25:14.366 "data_size": 0 00:25:14.366 } 00:25:14.366 ] 00:25:14.366 }' 00:25:14.366 01:55:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:14.366 01:55:14 -- common/autotest_common.sh@10 -- # set +x 00:25:14.934 01:55:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:15.194 [2024-04-24 01:55:15.165210] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:15.194 [2024-04-24 01:55:15.165570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:25:15.194 01:55:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:15.452 [2024-04-24 01:55:15.457294] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:15.452 [2024-04-24 01:55:15.457691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:15.452 [2024-04-24 01:55:15.457793] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:15.452 [2024-04-24 01:55:15.457915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:15.452 01:55:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:15.711 [2024-04-24 01:55:15.787897] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:15.711 BaseBdev1 00:25:15.969 01:55:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:15.969 01:55:15 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:15.969 01:55:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:15.969 01:55:15 -- common/autotest_common.sh@887 -- # local i 00:25:15.969 01:55:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:15.969 01:55:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:15.969 01:55:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:15.969 01:55:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:16.227 [ 00:25:16.227 { 00:25:16.227 "name": "BaseBdev1", 00:25:16.227 "aliases": [ 00:25:16.227 "201b3b88-1e64-42df-be49-b4a61491d00d" 00:25:16.227 ], 00:25:16.227 "product_name": "Malloc disk", 00:25:16.227 "block_size": 512, 00:25:16.227 "num_blocks": 65536, 00:25:16.227 "uuid": "201b3b88-1e64-42df-be49-b4a61491d00d", 00:25:16.227 "assigned_rate_limits": { 00:25:16.227 "rw_ios_per_sec": 0, 00:25:16.227 "rw_mbytes_per_sec": 0, 00:25:16.227 "r_mbytes_per_sec": 0, 00:25:16.227 "w_mbytes_per_sec": 0 00:25:16.227 }, 00:25:16.227 "claimed": true, 00:25:16.227 "claim_type": "exclusive_write", 00:25:16.227 "zoned": false, 00:25:16.227 "supported_io_types": { 00:25:16.227 "read": true, 00:25:16.227 "write": true, 00:25:16.227 "unmap": true, 00:25:16.227 "write_zeroes": true, 00:25:16.227 "flush": true, 00:25:16.227 "reset": true, 00:25:16.227 "compare": false, 00:25:16.227 "compare_and_write": false, 00:25:16.227 "abort": true, 00:25:16.227 "nvme_admin": false, 00:25:16.227 "nvme_io": 
false 00:25:16.227 }, 00:25:16.227 "memory_domains": [ 00:25:16.227 { 00:25:16.227 "dma_device_id": "system", 00:25:16.227 "dma_device_type": 1 00:25:16.227 }, 00:25:16.227 { 00:25:16.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.227 "dma_device_type": 2 00:25:16.227 } 00:25:16.227 ], 00:25:16.227 "driver_specific": {} 00:25:16.227 } 00:25:16.227 ] 00:25:16.227 01:55:16 -- common/autotest_common.sh@893 -- # return 0 00:25:16.227 01:55:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:16.227 01:55:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.228 01:55:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.485 01:55:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.485 "name": "Existed_Raid", 00:25:16.485 "uuid": "a740c854-9552-4f9b-9292-101292b3a3dd", 00:25:16.485 "strip_size_kb": 64, 00:25:16.485 "state": "configuring", 00:25:16.485 "raid_level": "concat", 00:25:16.485 "superblock": true, 00:25:16.485 "num_base_bdevs": 2, 00:25:16.485 "num_base_bdevs_discovered": 1, 00:25:16.485 "num_base_bdevs_operational": 2, 00:25:16.485 "base_bdevs_list": [ 00:25:16.485 { 00:25:16.485 "name": "BaseBdev1", 00:25:16.485 "uuid": "201b3b88-1e64-42df-be49-b4a61491d00d", 00:25:16.485 "is_configured": true, 00:25:16.485 "data_offset": 2048, 00:25:16.485 "data_size": 63488 00:25:16.485 }, 00:25:16.485 { 00:25:16.485 "name": "BaseBdev2", 00:25:16.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.485 "is_configured": false, 00:25:16.485 "data_offset": 0, 00:25:16.485 "data_size": 0 00:25:16.485 } 00:25:16.485 ] 00:25:16.485 }' 00:25:16.485 01:55:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.485 01:55:16 -- common/autotest_common.sh@10 -- # set +x 00:25:17.051 01:55:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:17.310 [2024-04-24 01:55:17.324308] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:17.310 [2024-04-24 01:55:17.324663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:25:17.310 01:55:17 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:25:17.310 01:55:17 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:17.876 01:55:17 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:18.133 BaseBdev1 00:25:18.133 01:55:18 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:25:18.133 01:55:18 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:25:18.133 01:55:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:18.133 01:55:18 -- common/autotest_common.sh@887 -- # local i 00:25:18.133 01:55:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:18.133 01:55:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:18.133 01:55:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:18.133 01:55:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:18.390 [ 00:25:18.390 { 00:25:18.390 "name": "BaseBdev1", 00:25:18.390 "aliases": [ 00:25:18.391 "cbf51f83-54f7-488e-83ef-d92d7b4bdb3b" 00:25:18.391 ], 00:25:18.391 "product_name": "Malloc disk", 00:25:18.391 "block_size": 512, 00:25:18.391 "num_blocks": 65536, 00:25:18.391 "uuid": "cbf51f83-54f7-488e-83ef-d92d7b4bdb3b", 00:25:18.391 "assigned_rate_limits": { 00:25:18.391 "rw_ios_per_sec": 0, 00:25:18.391 "rw_mbytes_per_sec": 0, 00:25:18.391 "r_mbytes_per_sec": 0, 00:25:18.391 "w_mbytes_per_sec": 0 00:25:18.391 }, 00:25:18.391 "claimed": false, 00:25:18.391 "zoned": false, 00:25:18.391 "supported_io_types": { 00:25:18.391 "read": true, 00:25:18.391 "write": true, 00:25:18.391 "unmap": true, 00:25:18.391 "write_zeroes": true, 00:25:18.391 "flush": true, 00:25:18.391 "reset": true, 00:25:18.391 "compare": false, 00:25:18.391 "compare_and_write": false, 00:25:18.391 "abort": true, 00:25:18.391 "nvme_admin": false, 00:25:18.391 "nvme_io": false 00:25:18.391 }, 00:25:18.391 "memory_domains": [ 00:25:18.391 { 00:25:18.391 "dma_device_id": "system", 00:25:18.391 "dma_device_type": 1 00:25:18.391 }, 00:25:18.391 { 00:25:18.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.391 "dma_device_type": 2 00:25:18.391 } 00:25:18.391 ], 00:25:18.391 "driver_specific": {} 00:25:18.391 } 00:25:18.391 ] 00:25:18.391 01:55:18 -- common/autotest_common.sh@893 -- # return 0 00:25:18.391 01:55:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:18.649 [2024-04-24 01:55:18.689186] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.649 [2024-04-24 01:55:18.691662] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:18.649 [2024-04-24 01:55:18.691727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:18.649 01:55:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:18.649 01:55:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:18.649 01:55:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.650 01:55:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.217 01:55:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:19.217 "name": "Existed_Raid", 00:25:19.217 "uuid": "79f9dd3f-c958-4f85-b336-22755b80b3e2", 00:25:19.217 "strip_size_kb": 64, 00:25:19.217 "state": "configuring", 00:25:19.217 "raid_level": "concat", 00:25:19.217 "superblock": true, 00:25:19.217 "num_base_bdevs": 2, 00:25:19.217 "num_base_bdevs_discovered": 1, 00:25:19.217 "num_base_bdevs_operational": 2, 00:25:19.217 "base_bdevs_list": [ 00:25:19.217 { 00:25:19.217 "name": "BaseBdev1", 00:25:19.217 "uuid": "cbf51f83-54f7-488e-83ef-d92d7b4bdb3b", 00:25:19.217 "is_configured": true, 00:25:19.217 "data_offset": 2048, 00:25:19.217 "data_size": 63488 00:25:19.217 }, 00:25:19.217 { 00:25:19.217 "name": "BaseBdev2", 00:25:19.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.217 "is_configured": false, 00:25:19.217 "data_offset": 0, 00:25:19.217 "data_size": 0 00:25:19.217 } 00:25:19.217 ] 00:25:19.217 }' 00:25:19.217 01:55:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:19.217 01:55:18 -- common/autotest_common.sh@10 -- # set +x 00:25:19.784 01:55:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:20.042 [2024-04-24 01:55:19.906731] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:20.042 [2024-04-24 01:55:19.907027] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:20.042 [2024-04-24 01:55:19.907043] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:20.042 [2024-04-24 01:55:19.907216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:20.042 [2024-04-24 01:55:19.907571] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:20.042 [2024-04-24 01:55:19.907591] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:25:20.042 [2024-04-24 01:55:19.907753] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.042 BaseBdev2 00:25:20.042 01:55:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:20.042 01:55:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:25:20.042 01:55:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:20.042 01:55:19 -- common/autotest_common.sh@887 -- # local i 00:25:20.042 01:55:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:20.042 01:55:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:20.042 01:55:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.299 01:55:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:20.557 [ 00:25:20.557 { 00:25:20.557 "name": "BaseBdev2", 00:25:20.557 "aliases": [ 00:25:20.557 "3db44729-e7d9-4127-9399-a980039263b7" 00:25:20.557 ], 00:25:20.557 "product_name": "Malloc disk", 00:25:20.557 "block_size": 512, 00:25:20.557 "num_blocks": 65536, 00:25:20.557 "uuid": "3db44729-e7d9-4127-9399-a980039263b7", 00:25:20.557 
"assigned_rate_limits": { 00:25:20.557 "rw_ios_per_sec": 0, 00:25:20.557 "rw_mbytes_per_sec": 0, 00:25:20.557 "r_mbytes_per_sec": 0, 00:25:20.557 "w_mbytes_per_sec": 0 00:25:20.557 }, 00:25:20.557 "claimed": true, 00:25:20.557 "claim_type": "exclusive_write", 00:25:20.557 "zoned": false, 00:25:20.557 "supported_io_types": { 00:25:20.557 "read": true, 00:25:20.557 "write": true, 00:25:20.557 "unmap": true, 00:25:20.557 "write_zeroes": true, 00:25:20.557 "flush": true, 00:25:20.557 "reset": true, 00:25:20.557 "compare": false, 00:25:20.557 "compare_and_write": false, 00:25:20.557 "abort": true, 00:25:20.557 "nvme_admin": false, 00:25:20.557 "nvme_io": false 00:25:20.557 }, 00:25:20.557 "memory_domains": [ 00:25:20.557 { 00:25:20.557 "dma_device_id": "system", 00:25:20.557 "dma_device_type": 1 00:25:20.557 }, 00:25:20.557 { 00:25:20.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.557 "dma_device_type": 2 00:25:20.557 } 00:25:20.557 ], 00:25:20.557 "driver_specific": {} 00:25:20.557 } 00:25:20.557 ] 00:25:20.557 01:55:20 -- common/autotest_common.sh@893 -- # return 0 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.557 01:55:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.815 01:55:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:20.815 "name": "Existed_Raid", 00:25:20.815 "uuid": "79f9dd3f-c958-4f85-b336-22755b80b3e2", 00:25:20.815 "strip_size_kb": 64, 00:25:20.815 "state": "online", 00:25:20.815 "raid_level": "concat", 00:25:20.815 "superblock": true, 00:25:20.815 "num_base_bdevs": 2, 00:25:20.815 "num_base_bdevs_discovered": 2, 00:25:20.815 "num_base_bdevs_operational": 2, 00:25:20.815 "base_bdevs_list": [ 00:25:20.815 { 00:25:20.815 "name": "BaseBdev1", 00:25:20.815 "uuid": "cbf51f83-54f7-488e-83ef-d92d7b4bdb3b", 00:25:20.815 "is_configured": true, 00:25:20.815 "data_offset": 2048, 00:25:20.815 "data_size": 63488 00:25:20.815 }, 00:25:20.815 { 00:25:20.815 "name": "BaseBdev2", 00:25:20.815 "uuid": "3db44729-e7d9-4127-9399-a980039263b7", 00:25:20.815 "is_configured": true, 00:25:20.815 "data_offset": 2048, 00:25:20.815 "data_size": 63488 00:25:20.815 } 00:25:20.815 ] 00:25:20.815 }' 00:25:20.815 01:55:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:20.815 01:55:20 -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 01:55:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:22.007 [2024-04-24 01:55:21.839245] 
bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:22.007 [2024-04-24 01:55:21.839284] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.007 [2024-04-24 01:55:21.839340] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.007 01:55:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.265 01:55:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:22.265 "name": "Existed_Raid", 00:25:22.265 "uuid": "79f9dd3f-c958-4f85-b336-22755b80b3e2", 00:25:22.265 "strip_size_kb": 64, 00:25:22.265 "state": "offline", 00:25:22.265 "raid_level": "concat", 00:25:22.265 "superblock": true, 00:25:22.265 "num_base_bdevs": 2, 00:25:22.265 "num_base_bdevs_discovered": 1, 00:25:22.265 "num_base_bdevs_operational": 1, 00:25:22.265 "base_bdevs_list": [ 00:25:22.265 { 00:25:22.265 "name": null, 00:25:22.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.265 "is_configured": false, 00:25:22.265 "data_offset": 2048, 00:25:22.265 "data_size": 63488 00:25:22.265 }, 00:25:22.265 { 00:25:22.265 "name": "BaseBdev2", 00:25:22.265 "uuid": "3db44729-e7d9-4127-9399-a980039263b7", 00:25:22.265 "is_configured": true, 00:25:22.265 "data_offset": 2048, 00:25:22.265 "data_size": 63488 00:25:22.265 } 00:25:22.265 ] 00:25:22.265 }' 00:25:22.265 01:55:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:22.265 01:55:22 -- common/autotest_common.sh@10 -- # set +x 00:25:22.845 01:55:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:22.845 01:55:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:22.845 01:55:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.845 01:55:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:23.136 01:55:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:23.136 01:55:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:23.136 01:55:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:23.436 [2024-04-24 01:55:23.228171] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:25:23.437 [2024-04-24 01:55:23.228251] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:25:23.437 01:55:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:23.437 01:55:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:23.437 01:55:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.437 01:55:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:23.699 01:55:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:23.699 01:55:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:23.699 01:55:23 -- bdev/bdev_raid.sh@287 -- # killprocess 121628 00:25:23.699 01:55:23 -- common/autotest_common.sh@936 -- # '[' -z 121628 ']' 00:25:23.699 01:55:23 -- common/autotest_common.sh@940 -- # kill -0 121628 00:25:23.699 01:55:23 -- common/autotest_common.sh@941 -- # uname 00:25:23.699 01:55:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:23.699 01:55:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121628 00:25:23.699 01:55:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:23.699 killing process with pid 121628 00:25:23.699 01:55:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:23.699 01:55:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121628' 00:25:23.699 01:55:23 -- common/autotest_common.sh@955 -- # kill 121628 00:25:23.699 [2024-04-24 01:55:23.671793] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.699 01:55:23 -- common/autotest_common.sh@960 -- # wait 121628 00:25:23.699 [2024-04-24 01:55:23.671947] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.078 ************************************ 00:25:25.078 END TEST raid_state_function_test_sb 00:25:25.078 ************************************ 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:25.078 00:25:25.078 real 0m12.345s 00:25:25.078 user 0m20.473s 00:25:25.078 sys 0m2.031s 00:25:25.078 01:55:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:25.078 01:55:25 -- common/autotest_common.sh@10 -- # set +x 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:25:25.078 01:55:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:25.078 01:55:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:25.078 01:55:25 -- common/autotest_common.sh@10 -- # set +x 00:25:25.078 ************************************ 00:25:25.078 START TEST raid_superblock_test 00:25:25.078 ************************************ 00:25:25.078 01:55:25 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 2 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:25:25.078 01:55:25 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:25.079 01:55:25 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:25.079 01:55:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=121980 00:25:25.079 01:55:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121980 /var/tmp/spdk-raid.sock 00:25:25.079 01:55:25 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:25.079 01:55:25 -- common/autotest_common.sh@817 -- # '[' -z 121980 ']' 00:25:25.079 01:55:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:25.079 01:55:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.079 01:55:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:25.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:25.079 01:55:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.079 01:55:25 -- common/autotest_common.sh@10 -- # set +x 00:25:25.338 [2024-04-24 01:55:25.217164] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:25:25.338 [2024-04-24 01:55:25.217340] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121980 ] 00:25:25.338 [2024-04-24 01:55:25.409424] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.597 [2024-04-24 01:55:25.625910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.856 [2024-04-24 01:55:25.859778] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.116 01:55:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.116 01:55:26 -- common/autotest_common.sh@850 -- # return 0 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:26.116 01:55:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:26.374 malloc1 00:25:26.374 01:55:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:26.631 [2024-04-24 01:55:26.569638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:26.631 [2024-04-24 01:55:26.569766] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:25:26.631 [2024-04-24 01:55:26.569801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:26.631 [2024-04-24 01:55:26.569845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.631 [2024-04-24 01:55:26.572367] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.631 [2024-04-24 01:55:26.572420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:26.631 pt1 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:26.631 01:55:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:26.889 malloc2 00:25:26.889 01:55:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:27.148 [2024-04-24 01:55:27.151259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:27.148 [2024-04-24 01:55:27.151345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.148 [2024-04-24 01:55:27.151386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:27.148 [2024-04-24 01:55:27.151436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.148 [2024-04-24 01:55:27.153758] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.148 [2024-04-24 01:55:27.153805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:27.148 pt2 00:25:27.148 01:55:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:27.148 01:55:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:27.148 01:55:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:25:27.407 [2024-04-24 01:55:27.407355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:27.407 [2024-04-24 01:55:27.409674] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.407 [2024-04-24 01:55:27.409890] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:25:27.407 [2024-04-24 01:55:27.409903] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:27.407 [2024-04-24 01:55:27.410086] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:27.407 [2024-04-24 01:55:27.410442] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:25:27.407 [2024-04-24 01:55:27.410462] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:25:27.407 
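Editor's note: the passthru and raid messages above come from raid_superblock_test assembling raid_bdev1 out of two passthru bdevs stacked on malloc bdevs. The sequence below reconstructs those RPC steps from the trace as a sketch; the RPC shell variable is shorthand introduced here, not part of the test script.

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # two 32 MiB, 512-byte-block malloc bdevs, each wrapped in a passthru bdev pt1/pt2
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # -s writes a superblock onto each base bdev, which is why the get_bdevs output
    # shows data_offset 2048 and data_size 63488 instead of the full 65536 blocks
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s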
[2024-04-24 01:55:27.410621] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.407 01:55:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.664 01:55:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.664 "name": "raid_bdev1", 00:25:27.664 "uuid": "8f1444b7-1771-4980-8ec5-511830133b09", 00:25:27.664 "strip_size_kb": 64, 00:25:27.664 "state": "online", 00:25:27.664 "raid_level": "concat", 00:25:27.664 "superblock": true, 00:25:27.664 "num_base_bdevs": 2, 00:25:27.664 "num_base_bdevs_discovered": 2, 00:25:27.664 "num_base_bdevs_operational": 2, 00:25:27.664 "base_bdevs_list": [ 00:25:27.664 { 00:25:27.664 "name": "pt1", 00:25:27.664 "uuid": "f25457d9-40f7-58c6-b0d2-0caaf8c53749", 00:25:27.664 "is_configured": true, 00:25:27.664 "data_offset": 2048, 00:25:27.664 "data_size": 63488 00:25:27.664 }, 00:25:27.664 { 00:25:27.664 "name": "pt2", 00:25:27.664 "uuid": "7bbd5864-95cd-5557-b0c8-19e22493e06f", 00:25:27.665 "is_configured": true, 00:25:27.665 "data_offset": 2048, 00:25:27.665 "data_size": 63488 00:25:27.665 } 00:25:27.665 ] 00:25:27.665 }' 00:25:27.665 01:55:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.665 01:55:27 -- common/autotest_common.sh@10 -- # set +x 00:25:28.279 01:55:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:28.279 01:55:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:28.537 [2024-04-24 01:55:28.351697] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.537 01:55:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8f1444b7-1771-4980-8ec5-511830133b09 00:25:28.537 01:55:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 8f1444b7-1771-4980-8ec5-511830133b09 ']' 00:25:28.537 01:55:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:28.794 [2024-04-24 01:55:28.631471] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.794 [2024-04-24 01:55:28.631498] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.794 [2024-04-24 01:55:28.631567] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.794 [2024-04-24 01:55:28.631618] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.794 [2024-04-24 01:55:28.631628] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name 
raid_bdev1, state offline 00:25:28.794 01:55:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:28.794 01:55:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.052 01:55:28 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:29.052 01:55:28 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:29.052 01:55:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:29.052 01:55:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:29.052 01:55:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:29.052 01:55:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:29.311 01:55:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:29.311 01:55:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:29.876 01:55:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:29.876 01:55:29 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:25:29.876 01:55:29 -- common/autotest_common.sh@638 -- # local es=0 00:25:29.876 01:55:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:25:29.876 01:55:29 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.876 01:55:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:29.876 01:55:29 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.876 01:55:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:29.876 01:55:29 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.876 01:55:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:29.876 01:55:29 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.876 01:55:29 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:29.876 01:55:29 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:25:29.876 [2024-04-24 01:55:29.915729] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:29.876 [2024-04-24 01:55:29.917685] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:29.876 [2024-04-24 01:55:29.917755] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:29.876 [2024-04-24 01:55:29.917824] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:29.876 [2024-04-24 01:55:29.917854] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:29.876 [2024-04-24 01:55:29.917864] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:25:29.876 request: 00:25:29.876 { 00:25:29.876 "name": 
"raid_bdev1", 00:25:29.876 "raid_level": "concat", 00:25:29.876 "base_bdevs": [ 00:25:29.876 "malloc1", 00:25:29.876 "malloc2" 00:25:29.876 ], 00:25:29.876 "superblock": false, 00:25:29.876 "strip_size_kb": 64, 00:25:29.876 "method": "bdev_raid_create", 00:25:29.876 "req_id": 1 00:25:29.876 } 00:25:29.876 Got JSON-RPC error response 00:25:29.876 response: 00:25:29.876 { 00:25:29.876 "code": -17, 00:25:29.876 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:29.876 } 00:25:29.876 01:55:29 -- common/autotest_common.sh@641 -- # es=1 00:25:29.876 01:55:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:29.876 01:55:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:29.876 01:55:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:29.876 01:55:29 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.876 01:55:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:30.134 01:55:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:30.134 01:55:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:30.134 01:55:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:30.393 [2024-04-24 01:55:30.391785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:30.393 [2024-04-24 01:55:30.391885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.393 [2024-04-24 01:55:30.391937] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:30.393 [2024-04-24 01:55:30.391964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.393 [2024-04-24 01:55:30.394501] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.393 [2024-04-24 01:55:30.394558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:30.393 [2024-04-24 01:55:30.394680] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:30.393 [2024-04-24 01:55:30.394727] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:30.393 pt1 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.393 01:55:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.651 01:55:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:30.651 "name": "raid_bdev1", 00:25:30.651 "uuid": "8f1444b7-1771-4980-8ec5-511830133b09", 00:25:30.651 
"strip_size_kb": 64, 00:25:30.651 "state": "configuring", 00:25:30.651 "raid_level": "concat", 00:25:30.651 "superblock": true, 00:25:30.651 "num_base_bdevs": 2, 00:25:30.651 "num_base_bdevs_discovered": 1, 00:25:30.651 "num_base_bdevs_operational": 2, 00:25:30.651 "base_bdevs_list": [ 00:25:30.651 { 00:25:30.651 "name": "pt1", 00:25:30.651 "uuid": "f25457d9-40f7-58c6-b0d2-0caaf8c53749", 00:25:30.651 "is_configured": true, 00:25:30.651 "data_offset": 2048, 00:25:30.651 "data_size": 63488 00:25:30.651 }, 00:25:30.651 { 00:25:30.651 "name": null, 00:25:30.651 "uuid": "7bbd5864-95cd-5557-b0c8-19e22493e06f", 00:25:30.651 "is_configured": false, 00:25:30.651 "data_offset": 2048, 00:25:30.651 "data_size": 63488 00:25:30.651 } 00:25:30.651 ] 00:25:30.651 }' 00:25:30.651 01:55:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:30.651 01:55:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.217 01:55:31 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:25:31.217 01:55:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:31.217 01:55:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:31.217 01:55:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:31.476 [2024-04-24 01:55:31.440058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:31.476 [2024-04-24 01:55:31.440184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.476 [2024-04-24 01:55:31.440222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:31.476 [2024-04-24 01:55:31.440249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.476 [2024-04-24 01:55:31.440748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.476 [2024-04-24 01:55:31.440790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:31.476 [2024-04-24 01:55:31.440894] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:31.476 [2024-04-24 01:55:31.440917] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:31.476 [2024-04-24 01:55:31.441024] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:31.476 [2024-04-24 01:55:31.441033] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:31.476 [2024-04-24 01:55:31.441164] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:31.476 [2024-04-24 01:55:31.441483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:31.476 [2024-04-24 01:55:31.441510] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:25:31.476 [2024-04-24 01:55:31.441656] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.476 pt2 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.476 01:55:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.735 01:55:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.735 "name": "raid_bdev1", 00:25:31.735 "uuid": "8f1444b7-1771-4980-8ec5-511830133b09", 00:25:31.735 "strip_size_kb": 64, 00:25:31.735 "state": "online", 00:25:31.735 "raid_level": "concat", 00:25:31.735 "superblock": true, 00:25:31.735 "num_base_bdevs": 2, 00:25:31.735 "num_base_bdevs_discovered": 2, 00:25:31.735 "num_base_bdevs_operational": 2, 00:25:31.735 "base_bdevs_list": [ 00:25:31.735 { 00:25:31.735 "name": "pt1", 00:25:31.735 "uuid": "f25457d9-40f7-58c6-b0d2-0caaf8c53749", 00:25:31.735 "is_configured": true, 00:25:31.735 "data_offset": 2048, 00:25:31.735 "data_size": 63488 00:25:31.735 }, 00:25:31.735 { 00:25:31.735 "name": "pt2", 00:25:31.735 "uuid": "7bbd5864-95cd-5557-b0c8-19e22493e06f", 00:25:31.735 "is_configured": true, 00:25:31.735 "data_offset": 2048, 00:25:31.735 "data_size": 63488 00:25:31.735 } 00:25:31.735 ] 00:25:31.735 }' 00:25:31.735 01:55:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.735 01:55:31 -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 01:55:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:32.558 01:55:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:32.903 [2024-04-24 01:55:32.704771] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:32.903 01:55:32 -- bdev/bdev_raid.sh@430 -- # '[' 8f1444b7-1771-4980-8ec5-511830133b09 '!=' 8f1444b7-1771-4980-8ec5-511830133b09 ']' 00:25:32.903 01:55:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:25:32.903 01:55:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:32.903 01:55:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:25:32.903 01:55:32 -- bdev/bdev_raid.sh@511 -- # killprocess 121980 00:25:32.903 01:55:32 -- common/autotest_common.sh@936 -- # '[' -z 121980 ']' 00:25:32.903 01:55:32 -- common/autotest_common.sh@940 -- # kill -0 121980 00:25:32.903 01:55:32 -- common/autotest_common.sh@941 -- # uname 00:25:32.903 01:55:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:32.903 01:55:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121980 00:25:32.903 killing process with pid 121980 00:25:32.903 01:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:32.903 01:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:32.903 01:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121980' 00:25:32.903 01:55:32 -- common/autotest_common.sh@955 -- # kill 121980 00:25:32.903 01:55:32 -- common/autotest_common.sh@960 -- # wait 121980 00:25:32.903 [2024-04-24 01:55:32.761564] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:32.903 [2024-04-24 01:55:32.761661] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:32.903 [2024-04-24 01:55:32.761726] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:32.903 [2024-04-24 01:55:32.761739] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:25:33.217 [2024-04-24 01:55:33.051862] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.593 ************************************ 00:25:34.593 END TEST raid_superblock_test 00:25:34.593 ************************************ 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:34.593 00:25:34.593 real 0m9.286s 00:25:34.593 user 0m15.215s 00:25:34.593 sys 0m1.359s 00:25:34.593 01:55:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:34.593 01:55:34 -- common/autotest_common.sh@10 -- # set +x 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:25:34.593 01:55:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:34.593 01:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:34.593 01:55:34 -- common/autotest_common.sh@10 -- # set +x 00:25:34.593 ************************************ 00:25:34.593 START TEST raid_state_function_test 00:25:34.593 ************************************ 00:25:34.593 01:55:34 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 false 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=122241 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122241' 00:25:34.593 Process raid pid: 122241 00:25:34.593 01:55:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:34.593 
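Editor's note: every verify_raid_bdev_state call in this log (bdev_raid.sh lines 117-129 of the trace) boils down to fetching the raid bdev over RPC, selecting it by name with jq, and comparing individual fields against the expected values passed in. The snippet below is a sketch of those checks as implied by the output for the raid1 state-function test; the real helper's variable names and failure handling may differ.

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # compare the fields the test asserts on with the expected state
    [ "$(jq -r '.state' <<<"$info")" = "configuring" ] || return 1
    [ "$(jq -r '.raid_level' <<<"$info")" = "raid1" ] || return 1
    [ "$(jq -r '.strip_size_kb' <<<"$info")" = "0" ] || return 1
    [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" = "2" ] || return 1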
01:55:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122241 /var/tmp/spdk-raid.sock 00:25:34.593 01:55:34 -- common/autotest_common.sh@817 -- # '[' -z 122241 ']' 00:25:34.593 01:55:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:34.593 01:55:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:34.593 01:55:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:34.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:34.593 01:55:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:34.593 01:55:34 -- common/autotest_common.sh@10 -- # set +x 00:25:34.593 [2024-04-24 01:55:34.617119] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:25:34.593 [2024-04-24 01:55:34.617476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.851 [2024-04-24 01:55:34.778684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.109 [2024-04-24 01:55:35.011399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.367 [2024-04-24 01:55:35.263473] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.625 01:55:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:35.625 01:55:35 -- common/autotest_common.sh@850 -- # return 0 00:25:35.625 01:55:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:35.884 [2024-04-24 01:55:35.854625] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:35.884 [2024-04-24 01:55:35.854893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:35.884 [2024-04-24 01:55:35.854981] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:35.884 [2024-04-24 01:55:35.855130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.884 01:55:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.142 01:55:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:36.143 "name": "Existed_Raid", 00:25:36.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.143 
"strip_size_kb": 0, 00:25:36.143 "state": "configuring", 00:25:36.143 "raid_level": "raid1", 00:25:36.143 "superblock": false, 00:25:36.143 "num_base_bdevs": 2, 00:25:36.143 "num_base_bdevs_discovered": 0, 00:25:36.143 "num_base_bdevs_operational": 2, 00:25:36.143 "base_bdevs_list": [ 00:25:36.143 { 00:25:36.143 "name": "BaseBdev1", 00:25:36.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.143 "is_configured": false, 00:25:36.143 "data_offset": 0, 00:25:36.143 "data_size": 0 00:25:36.143 }, 00:25:36.143 { 00:25:36.143 "name": "BaseBdev2", 00:25:36.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.143 "is_configured": false, 00:25:36.143 "data_offset": 0, 00:25:36.143 "data_size": 0 00:25:36.143 } 00:25:36.143 ] 00:25:36.143 }' 00:25:36.143 01:55:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:36.143 01:55:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.708 01:55:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:36.966 [2024-04-24 01:55:36.834706] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:36.966 [2024-04-24 01:55:36.834939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:25:36.966 01:55:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:37.224 [2024-04-24 01:55:37.086784] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:37.224 [2024-04-24 01:55:37.087103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:37.224 [2024-04-24 01:55:37.087297] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:37.224 [2024-04-24 01:55:37.087377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:37.224 01:55:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:37.483 [2024-04-24 01:55:37.359526] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:37.483 BaseBdev1 00:25:37.483 01:55:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:37.483 01:55:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:37.483 01:55:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:37.483 01:55:37 -- common/autotest_common.sh@887 -- # local i 00:25:37.483 01:55:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:37.483 01:55:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:37.483 01:55:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:37.742 01:55:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:38.016 [ 00:25:38.016 { 00:25:38.016 "name": "BaseBdev1", 00:25:38.016 "aliases": [ 00:25:38.016 "caa1061c-b214-4e67-a83d-09e5df0c9b38" 00:25:38.016 ], 00:25:38.016 "product_name": "Malloc disk", 00:25:38.016 "block_size": 512, 00:25:38.016 "num_blocks": 65536, 00:25:38.016 "uuid": "caa1061c-b214-4e67-a83d-09e5df0c9b38", 00:25:38.016 "assigned_rate_limits": { 00:25:38.016 "rw_ios_per_sec": 0, 00:25:38.016 
"rw_mbytes_per_sec": 0, 00:25:38.016 "r_mbytes_per_sec": 0, 00:25:38.016 "w_mbytes_per_sec": 0 00:25:38.016 }, 00:25:38.016 "claimed": true, 00:25:38.016 "claim_type": "exclusive_write", 00:25:38.016 "zoned": false, 00:25:38.016 "supported_io_types": { 00:25:38.016 "read": true, 00:25:38.016 "write": true, 00:25:38.016 "unmap": true, 00:25:38.016 "write_zeroes": true, 00:25:38.016 "flush": true, 00:25:38.016 "reset": true, 00:25:38.016 "compare": false, 00:25:38.016 "compare_and_write": false, 00:25:38.016 "abort": true, 00:25:38.016 "nvme_admin": false, 00:25:38.016 "nvme_io": false 00:25:38.016 }, 00:25:38.016 "memory_domains": [ 00:25:38.016 { 00:25:38.016 "dma_device_id": "system", 00:25:38.016 "dma_device_type": 1 00:25:38.016 }, 00:25:38.016 { 00:25:38.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.016 "dma_device_type": 2 00:25:38.016 } 00:25:38.016 ], 00:25:38.016 "driver_specific": {} 00:25:38.016 } 00:25:38.016 ] 00:25:38.016 01:55:37 -- common/autotest_common.sh@893 -- # return 0 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.016 01:55:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.275 01:55:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:38.275 "name": "Existed_Raid", 00:25:38.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.275 "strip_size_kb": 0, 00:25:38.275 "state": "configuring", 00:25:38.275 "raid_level": "raid1", 00:25:38.275 "superblock": false, 00:25:38.275 "num_base_bdevs": 2, 00:25:38.275 "num_base_bdevs_discovered": 1, 00:25:38.275 "num_base_bdevs_operational": 2, 00:25:38.275 "base_bdevs_list": [ 00:25:38.275 { 00:25:38.275 "name": "BaseBdev1", 00:25:38.275 "uuid": "caa1061c-b214-4e67-a83d-09e5df0c9b38", 00:25:38.275 "is_configured": true, 00:25:38.275 "data_offset": 0, 00:25:38.275 "data_size": 65536 00:25:38.275 }, 00:25:38.275 { 00:25:38.275 "name": "BaseBdev2", 00:25:38.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.275 "is_configured": false, 00:25:38.275 "data_offset": 0, 00:25:38.275 "data_size": 0 00:25:38.275 } 00:25:38.275 ] 00:25:38.275 }' 00:25:38.275 01:55:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:38.275 01:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:38.843 01:55:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:39.101 [2024-04-24 01:55:39.039926] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:39.101 [2024-04-24 01:55:39.040171] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, 
state configuring 00:25:39.101 01:55:39 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:39.101 01:55:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:39.360 [2024-04-24 01:55:39.307995] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:39.360 [2024-04-24 01:55:39.310333] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:39.360 [2024-04-24 01:55:39.310518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.360 01:55:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.618 01:55:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:39.618 "name": "Existed_Raid", 00:25:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.618 "strip_size_kb": 0, 00:25:39.618 "state": "configuring", 00:25:39.618 "raid_level": "raid1", 00:25:39.618 "superblock": false, 00:25:39.618 "num_base_bdevs": 2, 00:25:39.618 "num_base_bdevs_discovered": 1, 00:25:39.618 "num_base_bdevs_operational": 2, 00:25:39.618 "base_bdevs_list": [ 00:25:39.618 { 00:25:39.618 "name": "BaseBdev1", 00:25:39.618 "uuid": "caa1061c-b214-4e67-a83d-09e5df0c9b38", 00:25:39.618 "is_configured": true, 00:25:39.618 "data_offset": 0, 00:25:39.618 "data_size": 65536 00:25:39.618 }, 00:25:39.618 { 00:25:39.618 "name": "BaseBdev2", 00:25:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.618 "is_configured": false, 00:25:39.618 "data_offset": 0, 00:25:39.618 "data_size": 0 00:25:39.618 } 00:25:39.618 ] 00:25:39.618 }' 00:25:39.618 01:55:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:39.618 01:55:39 -- common/autotest_common.sh@10 -- # set +x 00:25:40.192 01:55:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:40.451 [2024-04-24 01:55:40.470960] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:40.451 [2024-04-24 01:55:40.471214] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:40.451 [2024-04-24 01:55:40.471256] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:40.451 [2024-04-24 01:55:40.471469] bdev_raid.c: 232:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:25:40.451 [2024-04-24 01:55:40.471863] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:40.451 [2024-04-24 01:55:40.471972] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:25:40.451 [2024-04-24 01:55:40.472315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.451 BaseBdev2 00:25:40.451 01:55:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:40.451 01:55:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:25:40.451 01:55:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:40.451 01:55:40 -- common/autotest_common.sh@887 -- # local i 00:25:40.451 01:55:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:40.451 01:55:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:40.451 01:55:40 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.709 01:55:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:40.969 [ 00:25:40.969 { 00:25:40.969 "name": "BaseBdev2", 00:25:40.969 "aliases": [ 00:25:40.969 "2a8277d2-34e8-4212-9039-62e696555779" 00:25:40.969 ], 00:25:40.969 "product_name": "Malloc disk", 00:25:40.969 "block_size": 512, 00:25:40.969 "num_blocks": 65536, 00:25:40.969 "uuid": "2a8277d2-34e8-4212-9039-62e696555779", 00:25:40.969 "assigned_rate_limits": { 00:25:40.969 "rw_ios_per_sec": 0, 00:25:40.969 "rw_mbytes_per_sec": 0, 00:25:40.969 "r_mbytes_per_sec": 0, 00:25:40.969 "w_mbytes_per_sec": 0 00:25:40.969 }, 00:25:40.969 "claimed": true, 00:25:40.969 "claim_type": "exclusive_write", 00:25:40.969 "zoned": false, 00:25:40.969 "supported_io_types": { 00:25:40.969 "read": true, 00:25:40.969 "write": true, 00:25:40.969 "unmap": true, 00:25:40.969 "write_zeroes": true, 00:25:40.969 "flush": true, 00:25:40.969 "reset": true, 00:25:40.969 "compare": false, 00:25:40.969 "compare_and_write": false, 00:25:40.969 "abort": true, 00:25:40.969 "nvme_admin": false, 00:25:40.969 "nvme_io": false 00:25:40.969 }, 00:25:40.969 "memory_domains": [ 00:25:40.969 { 00:25:40.969 "dma_device_id": "system", 00:25:40.969 "dma_device_type": 1 00:25:40.969 }, 00:25:40.969 { 00:25:40.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.969 "dma_device_type": 2 00:25:40.969 } 00:25:40.969 ], 00:25:40.969 "driver_specific": {} 00:25:40.969 } 00:25:40.969 ] 00:25:40.969 01:55:40 -- common/autotest_common.sh@893 -- # return 0 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
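(Annotation, not captured output.) The state check that follows, like every verify_raid_bdev_state call in this trace, reads the raid bdev back over the test's private RPC socket and filters it out of the full list by name. A minimal sketch of the equivalent query, with the socket path and bdev name taken from the trace; piping straight to .state is a shorthand, the helper itself keeps the whole JSON record and compares the fields individually:

    # full record for the raid bdev under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'
    # or just the state, expected to read "online" once both base bdevs are attached
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'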
00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.969 01:55:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.228 01:55:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:41.228 "name": "Existed_Raid", 00:25:41.228 "uuid": "c945e4a8-7ff8-42e0-9f63-27e19d5ae4f0", 00:25:41.228 "strip_size_kb": 0, 00:25:41.228 "state": "online", 00:25:41.228 "raid_level": "raid1", 00:25:41.228 "superblock": false, 00:25:41.228 "num_base_bdevs": 2, 00:25:41.228 "num_base_bdevs_discovered": 2, 00:25:41.228 "num_base_bdevs_operational": 2, 00:25:41.228 "base_bdevs_list": [ 00:25:41.228 { 00:25:41.228 "name": "BaseBdev1", 00:25:41.228 "uuid": "caa1061c-b214-4e67-a83d-09e5df0c9b38", 00:25:41.228 "is_configured": true, 00:25:41.228 "data_offset": 0, 00:25:41.228 "data_size": 65536 00:25:41.228 }, 00:25:41.228 { 00:25:41.228 "name": "BaseBdev2", 00:25:41.228 "uuid": "2a8277d2-34e8-4212-9039-62e696555779", 00:25:41.228 "is_configured": true, 00:25:41.228 "data_offset": 0, 00:25:41.228 "data_size": 65536 00:25:41.228 } 00:25:41.228 ] 00:25:41.228 }' 00:25:41.228 01:55:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:41.228 01:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:41.795 01:55:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:42.094 [2024-04-24 01:55:42.056597] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:42.374 "name": "Existed_Raid", 00:25:42.374 "uuid": "c945e4a8-7ff8-42e0-9f63-27e19d5ae4f0", 00:25:42.374 "strip_size_kb": 0, 00:25:42.374 "state": "online", 00:25:42.374 "raid_level": "raid1", 00:25:42.374 "superblock": false, 00:25:42.374 "num_base_bdevs": 2, 00:25:42.374 "num_base_bdevs_discovered": 1, 00:25:42.374 "num_base_bdevs_operational": 1, 00:25:42.374 "base_bdevs_list": [ 00:25:42.374 { 00:25:42.374 "name": null, 00:25:42.374 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:42.374 "is_configured": false, 00:25:42.374 "data_offset": 0, 00:25:42.374 "data_size": 65536 00:25:42.374 }, 00:25:42.374 { 00:25:42.374 "name": "BaseBdev2", 00:25:42.374 "uuid": "2a8277d2-34e8-4212-9039-62e696555779", 00:25:42.374 "is_configured": true, 00:25:42.374 "data_offset": 0, 00:25:42.374 "data_size": 65536 00:25:42.374 } 00:25:42.374 ] 00:25:42.374 }' 00:25:42.374 01:55:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:42.374 01:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:42.939 01:55:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:42.939 01:55:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:42.939 01:55:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.939 01:55:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:43.198 01:55:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:43.198 01:55:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:43.198 01:55:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:43.456 [2024-04-24 01:55:43.400216] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:43.456 [2024-04-24 01:55:43.400315] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.456 [2024-04-24 01:55:43.518421] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.456 [2024-04-24 01:55:43.518569] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.456 [2024-04-24 01:55:43.518582] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:25:43.715 01:55:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:43.715 01:55:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:43.715 01:55:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:43.715 01:55:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.974 01:55:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:43.974 01:55:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:43.974 01:55:43 -- bdev/bdev_raid.sh@287 -- # killprocess 122241 00:25:43.974 01:55:43 -- common/autotest_common.sh@936 -- # '[' -z 122241 ']' 00:25:43.974 01:55:43 -- common/autotest_common.sh@940 -- # kill -0 122241 00:25:43.974 01:55:43 -- common/autotest_common.sh@941 -- # uname 00:25:43.974 01:55:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.974 01:55:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122241 00:25:43.974 01:55:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:43.974 01:55:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:43.974 01:55:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122241' 00:25:43.974 killing process with pid 122241 00:25:43.974 01:55:43 -- common/autotest_common.sh@955 -- # kill 122241 00:25:43.974 [2024-04-24 01:55:43.859348] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:43.974 [2024-04-24 01:55:43.859466] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:43.974 01:55:43 -- common/autotest_common.sh@960 -- # wait 122241 00:25:45.349 
************************************ 00:25:45.349 END TEST raid_state_function_test 00:25:45.349 ************************************ 00:25:45.349 01:55:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:45.349 00:25:45.349 real 0m10.800s 00:25:45.349 user 0m17.993s 00:25:45.349 sys 0m1.525s 00:25:45.349 01:55:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:45.349 01:55:45 -- common/autotest_common.sh@10 -- # set +x 00:25:45.349 01:55:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:25:45.349 01:55:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:45.349 01:55:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.349 01:55:45 -- common/autotest_common.sh@10 -- # set +x 00:25:45.608 ************************************ 00:25:45.608 START TEST raid_state_function_test_sb 00:25:45.608 ************************************ 00:25:45.608 01:55:45 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 true 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=122579 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122579' 00:25:45.608 Process raid pid: 122579 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:45.608 01:55:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122579 /var/tmp/spdk-raid.sock 00:25:45.608 01:55:45 -- common/autotest_common.sh@817 -- # '[' -z 122579 ']' 00:25:45.608 01:55:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:45.608 01:55:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.609 01:55:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
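(Annotation, not captured output.) As in the previous test, the harness here starts a dedicated bdev_svc target on its own RPC socket and only proceeds once that socket answers. A rough by-hand equivalent of the launch-and-wait step, with the paths taken from the trace; the polling loop is a simplified stand-in for the waitforlisten helper the trace actually uses:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the UNIX-domain RPC socket accepts requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # from this point every bdev_* / bdev_raid_* command in the trace goes through rpc.py -s /var/tmp/spdk-raid.sock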
00:25:45.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:45.609 01:55:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.609 01:55:45 -- common/autotest_common.sh@10 -- # set +x 00:25:45.609 [2024-04-24 01:55:45.527723] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:25:45.609 [2024-04-24 01:55:45.527913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.867 [2024-04-24 01:55:45.709605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.125 [2024-04-24 01:55:46.001870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.385 [2024-04-24 01:55:46.249787] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:46.645 01:55:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.645 01:55:46 -- common/autotest_common.sh@850 -- # return 0 00:25:46.645 01:55:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:46.904 [2024-04-24 01:55:46.760335] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:46.904 [2024-04-24 01:55:46.760409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:46.904 [2024-04-24 01:55:46.760420] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:46.904 [2024-04-24 01:55:46.760437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.904 01:55:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.162 01:55:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.162 "name": "Existed_Raid", 00:25:47.162 "uuid": "8db2cc8a-c90d-4fb7-9571-4b77e408603d", 00:25:47.162 "strip_size_kb": 0, 00:25:47.162 "state": "configuring", 00:25:47.162 "raid_level": "raid1", 00:25:47.162 "superblock": true, 00:25:47.162 "num_base_bdevs": 2, 00:25:47.162 "num_base_bdevs_discovered": 0, 00:25:47.162 "num_base_bdevs_operational": 2, 00:25:47.162 "base_bdevs_list": [ 00:25:47.162 { 00:25:47.162 "name": "BaseBdev1", 00:25:47.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.162 "is_configured": false, 00:25:47.162 "data_offset": 0, 00:25:47.162 "data_size": 0 
00:25:47.162 }, 00:25:47.162 { 00:25:47.162 "name": "BaseBdev2", 00:25:47.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.162 "is_configured": false, 00:25:47.162 "data_offset": 0, 00:25:47.162 "data_size": 0 00:25:47.162 } 00:25:47.162 ] 00:25:47.162 }' 00:25:47.162 01:55:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.162 01:55:47 -- common/autotest_common.sh@10 -- # set +x 00:25:47.729 01:55:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:47.989 [2024-04-24 01:55:47.816461] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:47.989 [2024-04-24 01:55:47.816511] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:25:47.989 01:55:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:48.248 [2024-04-24 01:55:48.088555] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:48.248 [2024-04-24 01:55:48.088649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:48.248 [2024-04-24 01:55:48.088660] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:48.248 [2024-04-24 01:55:48.088692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:48.248 01:55:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:48.248 [2024-04-24 01:55:48.326783] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:48.248 BaseBdev1 00:25:48.507 01:55:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:48.507 01:55:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:48.507 01:55:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:48.507 01:55:48 -- common/autotest_common.sh@887 -- # local i 00:25:48.507 01:55:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:48.507 01:55:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:48.507 01:55:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:48.507 01:55:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:48.767 [ 00:25:48.767 { 00:25:48.767 "name": "BaseBdev1", 00:25:48.767 "aliases": [ 00:25:48.767 "b9d2d035-94f2-478d-b89a-0c644678ff32" 00:25:48.767 ], 00:25:48.767 "product_name": "Malloc disk", 00:25:48.767 "block_size": 512, 00:25:48.767 "num_blocks": 65536, 00:25:48.767 "uuid": "b9d2d035-94f2-478d-b89a-0c644678ff32", 00:25:48.767 "assigned_rate_limits": { 00:25:48.767 "rw_ios_per_sec": 0, 00:25:48.767 "rw_mbytes_per_sec": 0, 00:25:48.767 "r_mbytes_per_sec": 0, 00:25:48.767 "w_mbytes_per_sec": 0 00:25:48.767 }, 00:25:48.767 "claimed": true, 00:25:48.767 "claim_type": "exclusive_write", 00:25:48.767 "zoned": false, 00:25:48.767 "supported_io_types": { 00:25:48.767 "read": true, 00:25:48.767 "write": true, 00:25:48.767 "unmap": true, 00:25:48.767 "write_zeroes": true, 00:25:48.767 "flush": true, 00:25:48.767 "reset": true, 00:25:48.767 "compare": false, 00:25:48.767 "compare_and_write": false, 
00:25:48.767 "abort": true, 00:25:48.767 "nvme_admin": false, 00:25:48.767 "nvme_io": false 00:25:48.767 }, 00:25:48.767 "memory_domains": [ 00:25:48.767 { 00:25:48.767 "dma_device_id": "system", 00:25:48.767 "dma_device_type": 1 00:25:48.767 }, 00:25:48.767 { 00:25:48.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.767 "dma_device_type": 2 00:25:48.767 } 00:25:48.767 ], 00:25:48.767 "driver_specific": {} 00:25:48.767 } 00:25:48.767 ] 00:25:48.767 01:55:48 -- common/autotest_common.sh@893 -- # return 0 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.767 01:55:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.027 01:55:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:49.027 "name": "Existed_Raid", 00:25:49.027 "uuid": "f6d32528-9614-4e9a-b2a4-c5d8fe5620e2", 00:25:49.027 "strip_size_kb": 0, 00:25:49.027 "state": "configuring", 00:25:49.027 "raid_level": "raid1", 00:25:49.027 "superblock": true, 00:25:49.027 "num_base_bdevs": 2, 00:25:49.027 "num_base_bdevs_discovered": 1, 00:25:49.027 "num_base_bdevs_operational": 2, 00:25:49.027 "base_bdevs_list": [ 00:25:49.027 { 00:25:49.027 "name": "BaseBdev1", 00:25:49.027 "uuid": "b9d2d035-94f2-478d-b89a-0c644678ff32", 00:25:49.027 "is_configured": true, 00:25:49.027 "data_offset": 2048, 00:25:49.027 "data_size": 63488 00:25:49.027 }, 00:25:49.027 { 00:25:49.027 "name": "BaseBdev2", 00:25:49.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.027 "is_configured": false, 00:25:49.027 "data_offset": 0, 00:25:49.027 "data_size": 0 00:25:49.027 } 00:25:49.027 ] 00:25:49.027 }' 00:25:49.027 01:55:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:49.027 01:55:49 -- common/autotest_common.sh@10 -- # set +x 00:25:49.595 01:55:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:50.162 [2024-04-24 01:55:49.959198] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:50.162 [2024-04-24 01:55:49.959258] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:25:50.162 01:55:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:25:50.162 01:55:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:50.420 01:55:50 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:50.679 BaseBdev1 00:25:50.679 01:55:50 -- bdev/bdev_raid.sh@248 -- # waitforbdev 
BaseBdev1 00:25:50.679 01:55:50 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:50.679 01:55:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:50.679 01:55:50 -- common/autotest_common.sh@887 -- # local i 00:25:50.679 01:55:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:50.679 01:55:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:50.679 01:55:50 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.966 01:55:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:51.225 [ 00:25:51.225 { 00:25:51.225 "name": "BaseBdev1", 00:25:51.225 "aliases": [ 00:25:51.225 "d68d8558-ea92-44b8-ad32-21f2e5328e34" 00:25:51.225 ], 00:25:51.225 "product_name": "Malloc disk", 00:25:51.225 "block_size": 512, 00:25:51.225 "num_blocks": 65536, 00:25:51.225 "uuid": "d68d8558-ea92-44b8-ad32-21f2e5328e34", 00:25:51.225 "assigned_rate_limits": { 00:25:51.225 "rw_ios_per_sec": 0, 00:25:51.225 "rw_mbytes_per_sec": 0, 00:25:51.225 "r_mbytes_per_sec": 0, 00:25:51.225 "w_mbytes_per_sec": 0 00:25:51.225 }, 00:25:51.225 "claimed": false, 00:25:51.225 "zoned": false, 00:25:51.225 "supported_io_types": { 00:25:51.225 "read": true, 00:25:51.225 "write": true, 00:25:51.225 "unmap": true, 00:25:51.225 "write_zeroes": true, 00:25:51.225 "flush": true, 00:25:51.225 "reset": true, 00:25:51.225 "compare": false, 00:25:51.225 "compare_and_write": false, 00:25:51.225 "abort": true, 00:25:51.225 "nvme_admin": false, 00:25:51.225 "nvme_io": false 00:25:51.225 }, 00:25:51.225 "memory_domains": [ 00:25:51.225 { 00:25:51.225 "dma_device_id": "system", 00:25:51.225 "dma_device_type": 1 00:25:51.225 }, 00:25:51.225 { 00:25:51.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.225 "dma_device_type": 2 00:25:51.225 } 00:25:51.225 ], 00:25:51.225 "driver_specific": {} 00:25:51.225 } 00:25:51.225 ] 00:25:51.225 01:55:51 -- common/autotest_common.sh@893 -- # return 0 00:25:51.225 01:55:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:51.485 [2024-04-24 01:55:51.427376] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.485 [2024-04-24 01:55:51.429491] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:51.485 [2024-04-24 01:55:51.429550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.485 01:55:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.744 01:55:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:51.744 "name": "Existed_Raid", 00:25:51.744 "uuid": "5d4d007d-a64a-4d0b-a21e-3fb786fcb7c8", 00:25:51.744 "strip_size_kb": 0, 00:25:51.744 "state": "configuring", 00:25:51.744 "raid_level": "raid1", 00:25:51.744 "superblock": true, 00:25:51.744 "num_base_bdevs": 2, 00:25:51.744 "num_base_bdevs_discovered": 1, 00:25:51.744 "num_base_bdevs_operational": 2, 00:25:51.744 "base_bdevs_list": [ 00:25:51.744 { 00:25:51.744 "name": "BaseBdev1", 00:25:51.744 "uuid": "d68d8558-ea92-44b8-ad32-21f2e5328e34", 00:25:51.744 "is_configured": true, 00:25:51.744 "data_offset": 2048, 00:25:51.744 "data_size": 63488 00:25:51.744 }, 00:25:51.744 { 00:25:51.744 "name": "BaseBdev2", 00:25:51.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.744 "is_configured": false, 00:25:51.744 "data_offset": 0, 00:25:51.744 "data_size": 0 00:25:51.744 } 00:25:51.744 ] 00:25:51.744 }' 00:25:51.744 01:55:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:51.744 01:55:51 -- common/autotest_common.sh@10 -- # set +x 00:25:52.311 01:55:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:52.571 [2024-04-24 01:55:52.596525] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:52.571 [2024-04-24 01:55:52.596760] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:52.571 [2024-04-24 01:55:52.596773] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:52.571 [2024-04-24 01:55:52.596936] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:52.571 [2024-04-24 01:55:52.597262] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:52.571 [2024-04-24 01:55:52.597280] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:25:52.571 [2024-04-24 01:55:52.597443] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.571 BaseBdev2 00:25:52.571 01:55:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:52.571 01:55:52 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:25:52.571 01:55:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:52.571 01:55:52 -- common/autotest_common.sh@887 -- # local i 00:25:52.571 01:55:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:52.571 01:55:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:52.571 01:55:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:52.830 01:55:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:53.090 [ 00:25:53.090 { 00:25:53.090 "name": "BaseBdev2", 00:25:53.090 "aliases": [ 00:25:53.090 "3b67326b-15b7-43a6-903c-1e6f3a9ccd0a" 00:25:53.090 ], 00:25:53.090 "product_name": "Malloc disk", 00:25:53.090 "block_size": 512, 00:25:53.090 "num_blocks": 65536, 
00:25:53.090 "uuid": "3b67326b-15b7-43a6-903c-1e6f3a9ccd0a", 00:25:53.090 "assigned_rate_limits": { 00:25:53.090 "rw_ios_per_sec": 0, 00:25:53.090 "rw_mbytes_per_sec": 0, 00:25:53.090 "r_mbytes_per_sec": 0, 00:25:53.090 "w_mbytes_per_sec": 0 00:25:53.090 }, 00:25:53.090 "claimed": true, 00:25:53.090 "claim_type": "exclusive_write", 00:25:53.090 "zoned": false, 00:25:53.090 "supported_io_types": { 00:25:53.090 "read": true, 00:25:53.090 "write": true, 00:25:53.090 "unmap": true, 00:25:53.090 "write_zeroes": true, 00:25:53.090 "flush": true, 00:25:53.090 "reset": true, 00:25:53.090 "compare": false, 00:25:53.090 "compare_and_write": false, 00:25:53.090 "abort": true, 00:25:53.090 "nvme_admin": false, 00:25:53.090 "nvme_io": false 00:25:53.090 }, 00:25:53.090 "memory_domains": [ 00:25:53.090 { 00:25:53.090 "dma_device_id": "system", 00:25:53.090 "dma_device_type": 1 00:25:53.090 }, 00:25:53.090 { 00:25:53.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.090 "dma_device_type": 2 00:25:53.090 } 00:25:53.090 ], 00:25:53.090 "driver_specific": {} 00:25:53.090 } 00:25:53.090 ] 00:25:53.090 01:55:53 -- common/autotest_common.sh@893 -- # return 0 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:53.090 01:55:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:53.091 01:55:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:53.091 01:55:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:53.091 01:55:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.091 01:55:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.350 01:55:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.350 "name": "Existed_Raid", 00:25:53.350 "uuid": "5d4d007d-a64a-4d0b-a21e-3fb786fcb7c8", 00:25:53.350 "strip_size_kb": 0, 00:25:53.350 "state": "online", 00:25:53.350 "raid_level": "raid1", 00:25:53.350 "superblock": true, 00:25:53.350 "num_base_bdevs": 2, 00:25:53.350 "num_base_bdevs_discovered": 2, 00:25:53.350 "num_base_bdevs_operational": 2, 00:25:53.350 "base_bdevs_list": [ 00:25:53.351 { 00:25:53.351 "name": "BaseBdev1", 00:25:53.351 "uuid": "d68d8558-ea92-44b8-ad32-21f2e5328e34", 00:25:53.351 "is_configured": true, 00:25:53.351 "data_offset": 2048, 00:25:53.351 "data_size": 63488 00:25:53.351 }, 00:25:53.351 { 00:25:53.351 "name": "BaseBdev2", 00:25:53.351 "uuid": "3b67326b-15b7-43a6-903c-1e6f3a9ccd0a", 00:25:53.351 "is_configured": true, 00:25:53.351 "data_offset": 2048, 00:25:53.351 "data_size": 63488 00:25:53.351 } 00:25:53.351 ] 00:25:53.351 }' 00:25:53.351 01:55:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.351 01:55:53 -- common/autotest_common.sh@10 -- # set +x 00:25:53.918 01:55:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:25:53.918 [2024-04-24 01:55:53.984887] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.177 01:55:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.436 01:55:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:54.437 "name": "Existed_Raid", 00:25:54.437 "uuid": "5d4d007d-a64a-4d0b-a21e-3fb786fcb7c8", 00:25:54.437 "strip_size_kb": 0, 00:25:54.437 "state": "online", 00:25:54.437 "raid_level": "raid1", 00:25:54.437 "superblock": true, 00:25:54.437 "num_base_bdevs": 2, 00:25:54.437 "num_base_bdevs_discovered": 1, 00:25:54.437 "num_base_bdevs_operational": 1, 00:25:54.437 "base_bdevs_list": [ 00:25:54.437 { 00:25:54.437 "name": null, 00:25:54.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.437 "is_configured": false, 00:25:54.437 "data_offset": 2048, 00:25:54.437 "data_size": 63488 00:25:54.437 }, 00:25:54.437 { 00:25:54.437 "name": "BaseBdev2", 00:25:54.437 "uuid": "3b67326b-15b7-43a6-903c-1e6f3a9ccd0a", 00:25:54.437 "is_configured": true, 00:25:54.437 "data_offset": 2048, 00:25:54.437 "data_size": 63488 00:25:54.437 } 00:25:54.437 ] 00:25:54.437 }' 00:25:54.437 01:55:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:54.437 01:55:54 -- common/autotest_common.sh@10 -- # set +x 00:25:55.005 01:55:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:55.005 01:55:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:55.005 01:55:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.005 01:55:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:55.265 01:55:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:55.265 01:55:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:55.265 01:55:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:55.265 [2024-04-24 01:55:55.337028] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:55.265 [2024-04-24 01:55:55.337130] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:55.526 [2024-04-24 
01:55:55.436914] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:55.526 [2024-04-24 01:55:55.437042] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:55.526 [2024-04-24 01:55:55.437054] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:25:55.526 01:55:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:55.526 01:55:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:55.526 01:55:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.526 01:55:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:55.785 01:55:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:55.785 01:55:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:55.785 01:55:55 -- bdev/bdev_raid.sh@287 -- # killprocess 122579 00:25:55.785 01:55:55 -- common/autotest_common.sh@936 -- # '[' -z 122579 ']' 00:25:55.785 01:55:55 -- common/autotest_common.sh@940 -- # kill -0 122579 00:25:55.785 01:55:55 -- common/autotest_common.sh@941 -- # uname 00:25:55.785 01:55:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:55.785 01:55:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122579 00:25:55.785 01:55:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:55.785 01:55:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:55.785 killing process with pid 122579 00:25:55.785 01:55:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122579' 00:25:55.785 01:55:55 -- common/autotest_common.sh@955 -- # kill 122579 00:25:55.785 [2024-04-24 01:55:55.749420] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:55.785 [2024-04-24 01:55:55.749563] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:55.785 01:55:55 -- common/autotest_common.sh@960 -- # wait 122579 00:25:57.255 ************************************ 00:25:57.255 END TEST raid_state_function_test_sb 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:57.255 00:25:57.255 real 0m11.625s 00:25:57.255 user 0m19.466s 00:25:57.255 sys 0m1.740s 00:25:57.255 01:55:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:57.255 01:55:57 -- common/autotest_common.sh@10 -- # set +x 00:25:57.255 ************************************ 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:25:57.255 01:55:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:57.255 01:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:57.255 01:55:57 -- common/autotest_common.sh@10 -- # set +x 00:25:57.255 ************************************ 00:25:57.255 START TEST raid_superblock_test 00:25:57.255 ************************************ 00:25:57.255 01:55:57 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 2 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
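(Annotation, not captured output.) The superblock test starting here builds its two base devices as passthru bdevs layered on malloc disks, each pinned to a fixed UUID, before assembling raid_bdev1 on top of them. The commands below are condensed from the trace that follows, with rpc.py abbreviating the scripts/rpc.py path used throughout; sizes, socket and UUIDs are as shown there:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s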
00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@357 -- # raid_pid=122918 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:57.255 01:55:57 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122918 /var/tmp/spdk-raid.sock 00:25:57.255 01:55:57 -- common/autotest_common.sh@817 -- # '[' -z 122918 ']' 00:25:57.255 01:55:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:57.255 01:55:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:57.255 01:55:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:57.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:57.255 01:55:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:57.255 01:55:57 -- common/autotest_common.sh@10 -- # set +x 00:25:57.255 [2024-04-24 01:55:57.256103] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:25:57.255 [2024-04-24 01:55:57.256291] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122918 ] 00:25:57.514 [2024-04-24 01:55:57.438093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.772 [2024-04-24 01:55:57.714393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.031 [2024-04-24 01:55:57.946397] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.291 01:55:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:58.291 01:55:58 -- common/autotest_common.sh@850 -- # return 0 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.291 01:55:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:58.551 malloc1 00:25:58.551 01:55:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:58.812 [2024-04-24 01:55:58.758184] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:58.812 [2024-04-24 01:55:58.758281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.812 [2024-04-24 01:55:58.758315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:58.812 [2024-04-24 01:55:58.758368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.812 [2024-04-24 01:55:58.760813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.812 [2024-04-24 01:55:58.760865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:58.812 pt1 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.812 01:55:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:59.071 malloc2 00:25:59.071 01:55:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:59.330 [2024-04-24 01:55:59.230829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:59.330 [2024-04-24 01:55:59.230922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.330 [2024-04-24 01:55:59.230965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:59.330 [2024-04-24 01:55:59.231034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.330 [2024-04-24 01:55:59.233554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.330 [2024-04-24 01:55:59.233613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:59.330 pt2 00:25:59.330 01:55:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:59.330 01:55:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:59.330 01:55:59 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:59.588 [2024-04-24 01:55:59.474922] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:59.588 [2024-04-24 01:55:59.476994] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:59.588 [2024-04-24 01:55:59.477212] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:25:59.588 [2024-04-24 01:55:59.477225] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:59.588 [2024-04-24 01:55:59.477369] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:59.588 [2024-04-24 01:55:59.477722] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:25:59.588 [2024-04-24 01:55:59.477744] 
bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:25:59.588 [2024-04-24 01:55:59.477897] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.588 01:55:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.848 01:55:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:59.848 "name": "raid_bdev1", 00:25:59.848 "uuid": "bd95fc77-2e71-493e-8a23-d4a994c13adb", 00:25:59.848 "strip_size_kb": 0, 00:25:59.848 "state": "online", 00:25:59.848 "raid_level": "raid1", 00:25:59.848 "superblock": true, 00:25:59.848 "num_base_bdevs": 2, 00:25:59.848 "num_base_bdevs_discovered": 2, 00:25:59.848 "num_base_bdevs_operational": 2, 00:25:59.848 "base_bdevs_list": [ 00:25:59.848 { 00:25:59.848 "name": "pt1", 00:25:59.848 "uuid": "d686b40d-c1b6-5089-9228-1ba617c7f6e6", 00:25:59.848 "is_configured": true, 00:25:59.848 "data_offset": 2048, 00:25:59.848 "data_size": 63488 00:25:59.848 }, 00:25:59.848 { 00:25:59.848 "name": "pt2", 00:25:59.848 "uuid": "09885e85-0b5f-54a4-beed-81989a8c7e8e", 00:25:59.848 "is_configured": true, 00:25:59.848 "data_offset": 2048, 00:25:59.848 "data_size": 63488 00:25:59.848 } 00:25:59.848 ] 00:25:59.848 }' 00:25:59.848 01:55:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:59.848 01:55:59 -- common/autotest_common.sh@10 -- # set +x 00:26:00.416 01:56:00 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:00.416 01:56:00 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:00.675 [2024-04-24 01:56:00.519274] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.675 01:56:00 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bd95fc77-2e71-493e-8a23-d4a994c13adb 00:26:00.675 01:56:00 -- bdev/bdev_raid.sh@380 -- # '[' -z bd95fc77-2e71-493e-8a23-d4a994c13adb ']' 00:26:00.675 01:56:00 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:00.933 [2024-04-24 01:56:00.795138] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:00.933 [2024-04-24 01:56:00.795177] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:00.933 [2024-04-24 01:56:00.795260] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:00.933 [2024-04-24 01:56:00.795326] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
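Stripped of the xtrace noise, the create-and-verify flow traced above is driven entirely through rpc.py against the dedicated /var/tmp/spdk-raid.sock socket. The following sketch strings the same calls together (commands and flags are taken verbatim from the trace; the wrapper variable is only for brevity):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Two malloc bdevs, each wrapped in a passthru bdev with a fixed UUID; these become the RAID members.
$rpc bdev_malloc_create 32 512 -b malloc1
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_malloc_create 32 512 -b malloc2
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble a raid1 bdev with an on-disk superblock (-s) from the two passthru bdevs.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# Verify the array: state should be "online" with both base bdevs discovered.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Record the array UUID so it can be compared after later re-assembly from the superblocks.
raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')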
00:26:00.933 [2024-04-24 01:56:00.795337] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:26:00.933 01:56:00 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.933 01:56:00 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:00.933 01:56:01 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:00.933 01:56:01 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:00.933 01:56:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:00.933 01:56:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:01.499 01:56:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:01.499 01:56:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:01.499 01:56:01 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:01.499 01:56:01 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:01.758 01:56:01 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:01.758 01:56:01 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:01.758 01:56:01 -- common/autotest_common.sh@638 -- # local es=0 00:26:01.758 01:56:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:01.758 01:56:01 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.758 01:56:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.758 01:56:01 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.758 01:56:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.758 01:56:01 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.758 01:56:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.758 01:56:01 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.758 01:56:01 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:01.758 01:56:01 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:02.016 [2024-04-24 01:56:01.975461] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:02.016 [2024-04-24 01:56:01.977752] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:02.016 [2024-04-24 01:56:01.977841] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:02.016 [2024-04-24 01:56:01.977920] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:02.016 [2024-04-24 01:56:01.977964] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:02.016 [2024-04-24 01:56:01.977992] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 
name raid_bdev1, state configuring 00:26:02.016 request: 00:26:02.016 { 00:26:02.016 "name": "raid_bdev1", 00:26:02.016 "raid_level": "raid1", 00:26:02.016 "base_bdevs": [ 00:26:02.016 "malloc1", 00:26:02.016 "malloc2" 00:26:02.016 ], 00:26:02.016 "superblock": false, 00:26:02.016 "method": "bdev_raid_create", 00:26:02.016 "req_id": 1 00:26:02.016 } 00:26:02.016 Got JSON-RPC error response 00:26:02.016 response: 00:26:02.016 { 00:26:02.016 "code": -17, 00:26:02.016 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:02.016 } 00:26:02.016 01:56:01 -- common/autotest_common.sh@641 -- # es=1 00:26:02.016 01:56:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:02.016 01:56:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:02.016 01:56:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:02.016 01:56:01 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.016 01:56:02 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:02.275 01:56:02 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:02.275 01:56:02 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:02.275 01:56:02 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:02.533 [2024-04-24 01:56:02.391493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:02.533 [2024-04-24 01:56:02.391620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.533 [2024-04-24 01:56:02.391661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:02.533 [2024-04-24 01:56:02.391689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.533 [2024-04-24 01:56:02.394296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.533 [2024-04-24 01:56:02.394365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:02.533 [2024-04-24 01:56:02.394490] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:02.533 [2024-04-24 01:56:02.394544] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:02.533 pt1 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.533 01:56:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.791 01:56:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:02.791 "name": "raid_bdev1", 00:26:02.791 "uuid": 
"bd95fc77-2e71-493e-8a23-d4a994c13adb", 00:26:02.791 "strip_size_kb": 0, 00:26:02.791 "state": "configuring", 00:26:02.791 "raid_level": "raid1", 00:26:02.791 "superblock": true, 00:26:02.791 "num_base_bdevs": 2, 00:26:02.791 "num_base_bdevs_discovered": 1, 00:26:02.791 "num_base_bdevs_operational": 2, 00:26:02.792 "base_bdevs_list": [ 00:26:02.792 { 00:26:02.792 "name": "pt1", 00:26:02.792 "uuid": "d686b40d-c1b6-5089-9228-1ba617c7f6e6", 00:26:02.792 "is_configured": true, 00:26:02.792 "data_offset": 2048, 00:26:02.792 "data_size": 63488 00:26:02.792 }, 00:26:02.792 { 00:26:02.792 "name": null, 00:26:02.792 "uuid": "09885e85-0b5f-54a4-beed-81989a8c7e8e", 00:26:02.792 "is_configured": false, 00:26:02.792 "data_offset": 2048, 00:26:02.792 "data_size": 63488 00:26:02.792 } 00:26:02.792 ] 00:26:02.792 }' 00:26:02.792 01:56:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:02.792 01:56:02 -- common/autotest_common.sh@10 -- # set +x 00:26:03.357 01:56:03 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:03.358 [2024-04-24 01:56:03.367689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:03.358 [2024-04-24 01:56:03.367808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.358 [2024-04-24 01:56:03.367843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:03.358 [2024-04-24 01:56:03.367869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.358 [2024-04-24 01:56:03.368375] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.358 [2024-04-24 01:56:03.368419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:03.358 [2024-04-24 01:56:03.368531] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:03.358 [2024-04-24 01:56:03.368552] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:03.358 [2024-04-24 01:56:03.368659] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:03.358 [2024-04-24 01:56:03.368676] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:03.358 [2024-04-24 01:56:03.368792] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:03.358 [2024-04-24 01:56:03.369095] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:03.358 [2024-04-24 01:56:03.369114] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:26:03.358 [2024-04-24 01:56:03.369253] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.358 pt2 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:03.358 01:56:03 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.358 01:56:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.616 01:56:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:03.616 "name": "raid_bdev1", 00:26:03.616 "uuid": "bd95fc77-2e71-493e-8a23-d4a994c13adb", 00:26:03.616 "strip_size_kb": 0, 00:26:03.616 "state": "online", 00:26:03.616 "raid_level": "raid1", 00:26:03.616 "superblock": true, 00:26:03.616 "num_base_bdevs": 2, 00:26:03.616 "num_base_bdevs_discovered": 2, 00:26:03.616 "num_base_bdevs_operational": 2, 00:26:03.616 "base_bdevs_list": [ 00:26:03.616 { 00:26:03.616 "name": "pt1", 00:26:03.616 "uuid": "d686b40d-c1b6-5089-9228-1ba617c7f6e6", 00:26:03.616 "is_configured": true, 00:26:03.616 "data_offset": 2048, 00:26:03.616 "data_size": 63488 00:26:03.616 }, 00:26:03.616 { 00:26:03.616 "name": "pt2", 00:26:03.616 "uuid": "09885e85-0b5f-54a4-beed-81989a8c7e8e", 00:26:03.616 "is_configured": true, 00:26:03.616 "data_offset": 2048, 00:26:03.616 "data_size": 63488 00:26:03.616 } 00:26:03.616 ] 00:26:03.616 }' 00:26:03.616 01:56:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:03.616 01:56:03 -- common/autotest_common.sh@10 -- # set +x 00:26:04.183 01:56:04 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:04.183 01:56:04 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:04.442 [2024-04-24 01:56:04.448075] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.442 01:56:04 -- bdev/bdev_raid.sh@430 -- # '[' bd95fc77-2e71-493e-8a23-d4a994c13adb '!=' bd95fc77-2e71-493e-8a23-d4a994c13adb ']' 00:26:04.442 01:56:04 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:26:04.442 01:56:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:04.442 01:56:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:04.442 01:56:04 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:04.700 [2024-04-24 01:56:04.671994] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:04.700 01:56:04 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.700 01:56:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.958 01:56:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:04.958 "name": "raid_bdev1", 00:26:04.958 "uuid": "bd95fc77-2e71-493e-8a23-d4a994c13adb", 00:26:04.958 "strip_size_kb": 0, 00:26:04.958 "state": "online", 00:26:04.958 "raid_level": "raid1", 00:26:04.958 "superblock": true, 00:26:04.958 "num_base_bdevs": 2, 00:26:04.958 "num_base_bdevs_discovered": 1, 00:26:04.958 "num_base_bdevs_operational": 1, 00:26:04.958 "base_bdevs_list": [ 00:26:04.958 { 00:26:04.958 "name": null, 00:26:04.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.958 "is_configured": false, 00:26:04.958 "data_offset": 2048, 00:26:04.958 "data_size": 63488 00:26:04.958 }, 00:26:04.958 { 00:26:04.958 "name": "pt2", 00:26:04.958 "uuid": "09885e85-0b5f-54a4-beed-81989a8c7e8e", 00:26:04.958 "is_configured": true, 00:26:04.958 "data_offset": 2048, 00:26:04.958 "data_size": 63488 00:26:04.958 } 00:26:04.958 ] 00:26:04.958 }' 00:26:04.958 01:56:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:04.958 01:56:04 -- common/autotest_common.sh@10 -- # set +x 00:26:05.525 01:56:05 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:05.784 [2024-04-24 01:56:05.760170] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:05.784 [2024-04-24 01:56:05.760212] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.784 [2024-04-24 01:56:05.760286] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.784 [2024-04-24 01:56:05.760334] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.784 [2024-04-24 01:56:05.760343] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:26:05.784 01:56:05 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.784 01:56:05 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:26:06.043 01:56:06 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:26:06.043 01:56:06 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:26:06.044 01:56:06 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:26:06.044 01:56:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:06.044 01:56:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:06.302 01:56:06 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:06.302 01:56:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:06.302 01:56:06 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:26:06.302 01:56:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:06.302 01:56:06 -- bdev/bdev_raid.sh@462 -- # i=1 00:26:06.302 01:56:06 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:06.561 [2024-04-24 01:56:06.464263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:06.561 [2024-04-24 01:56:06.464364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.561 [2024-04-24 
01:56:06.464412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:06.561 [2024-04-24 01:56:06.464446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.561 [2024-04-24 01:56:06.467032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.561 [2024-04-24 01:56:06.467112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:06.561 [2024-04-24 01:56:06.467229] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:06.561 [2024-04-24 01:56:06.467275] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:06.561 [2024-04-24 01:56:06.467371] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:26:06.561 [2024-04-24 01:56:06.467379] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:06.561 [2024-04-24 01:56:06.467497] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:06.561 [2024-04-24 01:56:06.467791] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:26:06.561 [2024-04-24 01:56:06.467813] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:26:06.561 [2024-04-24 01:56:06.467951] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.561 pt2 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.561 01:56:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.820 01:56:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:06.820 "name": "raid_bdev1", 00:26:06.820 "uuid": "bd95fc77-2e71-493e-8a23-d4a994c13adb", 00:26:06.820 "strip_size_kb": 0, 00:26:06.820 "state": "online", 00:26:06.820 "raid_level": "raid1", 00:26:06.820 "superblock": true, 00:26:06.820 "num_base_bdevs": 2, 00:26:06.820 "num_base_bdevs_discovered": 1, 00:26:06.820 "num_base_bdevs_operational": 1, 00:26:06.820 "base_bdevs_list": [ 00:26:06.820 { 00:26:06.820 "name": null, 00:26:06.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.820 "is_configured": false, 00:26:06.820 "data_offset": 2048, 00:26:06.820 "data_size": 63488 00:26:06.820 }, 00:26:06.820 { 00:26:06.820 "name": "pt2", 00:26:06.820 "uuid": "09885e85-0b5f-54a4-beed-81989a8c7e8e", 00:26:06.820 "is_configured": true, 00:26:06.820 "data_offset": 2048, 00:26:06.820 "data_size": 63488 00:26:06.820 } 00:26:06.820 ] 00:26:06.820 }' 00:26:06.820 01:56:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
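Taken together, the trace above exercises two properties of a superblock-backed raid1 array: dropping one member leaves the array online in degraded mode, and after a full teardown the array re-assembles from the superblock of the surviving member. Schematically (a sketch assembled from the RPC calls visible in the trace, not the literal test script):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# raid1 is redundant, so removing one member keeps the array online.
$rpc bdev_passthru_delete pt1
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# expected: state "online", num_base_bdevs_discovered 1, first slot name null

# Tear the array down completely, then bring back only the second member.
$rpc bdev_raid_delete raid_bdev1
$rpc bdev_passthru_delete pt2
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# bdev examine finds the raid superblock on pt2 and re-creates raid_bdev1 around it,
# again online with a single discovered base bdev (as shown in the JSON dump above).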
00:26:06.820 01:56:06 -- common/autotest_common.sh@10 -- # set +x 00:26:07.388 01:56:07 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:26:07.388 01:56:07 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:07.388 01:56:07 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:07.647 [2024-04-24 01:56:07.632850] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:07.647 01:56:07 -- bdev/bdev_raid.sh@506 -- # '[' bd95fc77-2e71-493e-8a23-d4a994c13adb '!=' bd95fc77-2e71-493e-8a23-d4a994c13adb ']' 00:26:07.647 01:56:07 -- bdev/bdev_raid.sh@511 -- # killprocess 122918 00:26:07.647 01:56:07 -- common/autotest_common.sh@936 -- # '[' -z 122918 ']' 00:26:07.647 01:56:07 -- common/autotest_common.sh@940 -- # kill -0 122918 00:26:07.647 01:56:07 -- common/autotest_common.sh@941 -- # uname 00:26:07.647 01:56:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:07.647 01:56:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122918 00:26:07.647 01:56:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:07.647 01:56:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:07.647 killing process with pid 122918 00:26:07.647 01:56:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122918' 00:26:07.647 01:56:07 -- common/autotest_common.sh@955 -- # kill 122918 00:26:07.647 [2024-04-24 01:56:07.690502] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:07.647 [2024-04-24 01:56:07.690580] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:07.647 [2024-04-24 01:56:07.690632] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:07.647 [2024-04-24 01:56:07.690641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:26:07.647 01:56:07 -- common/autotest_common.sh@960 -- # wait 122918 00:26:07.906 [2024-04-24 01:56:07.888458] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.283 ************************************ 00:26:09.283 END TEST raid_superblock_test 00:26:09.283 ************************************ 00:26:09.283 01:56:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:09.283 00:26:09.283 real 0m12.073s 00:26:09.283 user 0m20.499s 00:26:09.283 sys 0m1.939s 00:26:09.283 01:56:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:09.283 01:56:09 -- common/autotest_common.sh@10 -- # set +x 00:26:09.283 01:56:09 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:26:09.283 01:56:09 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:26:09.283 01:56:09 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:26:09.283 01:56:09 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:09.283 01:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.283 01:56:09 -- common/autotest_common.sh@10 -- # set +x 00:26:09.283 ************************************ 00:26:09.283 START TEST raid_state_function_test 00:26:09.283 ************************************ 00:26:09.284 01:56:09 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 false 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@204 -- 
# local superblock=false 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=123282 00:26:09.284 Process raid pid: 123282 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123282' 00:26:09.284 01:56:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123282 /var/tmp/spdk-raid.sock 00:26:09.284 01:56:09 -- common/autotest_common.sh@817 -- # '[' -z 123282 ']' 00:26:09.543 01:56:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.543 01:56:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:09.543 01:56:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:09.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.543 01:56:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.543 01:56:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:09.543 01:56:09 -- common/autotest_common.sh@10 -- # set +x 00:26:09.543 [2024-04-24 01:56:09.443580] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
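In contrast to the raid1 superblock test above, this run parameterizes raid_state_function_test as raid0 over three base bdevs without superblocks. From the xtrace at bdev_raid.sh@202-@228, the argument handling reduces to roughly the following; treat it as a readability sketch rather than the verbatim helper (the loop in the real script appears to echo the names through a command substitution):

raid_state_function_test() {
    local raid_level=$1        # raid0
    local num_base_bdevs=$2    # 3
    local superblock=$3        # false
    local raid_bdev

    # Synthesize member names BaseBdev1..BaseBdevN.
    local base_bdevs=()
    local i
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")
    done

    local raid_bdev_name=Existed_Raid
    local strip_size strip_size_create_arg superblock_create_arg

    # raid0 takes a strip size; raid1 would not.
    if [ "$raid_level" != raid1 ]; then
        strip_size=64
        strip_size_create_arg="-z $strip_size"
    fi

    # No superblock in this run, so bdev_raid_create gets no extra flag.
    if [ "$superblock" = true ]; then
        superblock_create_arg=-s    # assumption: this branch is not exercised here
    else
        superblock_create_arg=
    fi
}

The process is then started the same way as before: bdev_svc on /var/tmp/spdk-raid.sock (here with -i 0 and -L bdev_raid), followed by waitforlisten on raid pid 123282.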
00:26:09.543 [2024-04-24 01:56:09.443773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.543 [2024-04-24 01:56:09.623449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.817 [2024-04-24 01:56:09.839839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.082 [2024-04-24 01:56:10.084854] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.341 01:56:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:10.341 01:56:10 -- common/autotest_common.sh@850 -- # return 0 00:26:10.341 01:56:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:10.600 [2024-04-24 01:56:10.495460] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:10.600 [2024-04-24 01:56:10.495540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:10.600 [2024-04-24 01:56:10.495551] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:10.600 [2024-04-24 01:56:10.495585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:10.600 [2024-04-24 01:56:10.495592] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:10.600 [2024-04-24 01:56:10.495635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.600 01:56:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.860 01:56:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.860 "name": "Existed_Raid", 00:26:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.860 "strip_size_kb": 64, 00:26:10.860 "state": "configuring", 00:26:10.860 "raid_level": "raid0", 00:26:10.860 "superblock": false, 00:26:10.860 "num_base_bdevs": 3, 00:26:10.860 "num_base_bdevs_discovered": 0, 00:26:10.860 "num_base_bdevs_operational": 3, 00:26:10.860 "base_bdevs_list": [ 00:26:10.860 { 00:26:10.860 "name": "BaseBdev1", 00:26:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.860 "is_configured": false, 00:26:10.860 "data_offset": 0, 00:26:10.860 "data_size": 0 00:26:10.860 }, 00:26:10.860 { 00:26:10.860 "name": "BaseBdev2", 00:26:10.860 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:10.860 "is_configured": false, 00:26:10.860 "data_offset": 0, 00:26:10.860 "data_size": 0 00:26:10.860 }, 00:26:10.860 { 00:26:10.860 "name": "BaseBdev3", 00:26:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.860 "is_configured": false, 00:26:10.860 "data_offset": 0, 00:26:10.860 "data_size": 0 00:26:10.860 } 00:26:10.860 ] 00:26:10.860 }' 00:26:10.860 01:56:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.860 01:56:10 -- common/autotest_common.sh@10 -- # set +x 00:26:11.428 01:56:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:11.687 [2024-04-24 01:56:11.639566] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.687 [2024-04-24 01:56:11.639612] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:26:11.687 01:56:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:11.946 [2024-04-24 01:56:11.927620] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.946 [2024-04-24 01:56:11.927695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.946 [2024-04-24 01:56:11.927706] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.946 [2024-04-24 01:56:11.927723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.946 [2024-04-24 01:56:11.927730] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.946 [2024-04-24 01:56:11.927751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.946 01:56:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:12.205 [2024-04-24 01:56:12.171861] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.205 BaseBdev1 00:26:12.205 01:56:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:12.205 01:56:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:12.205 01:56:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:12.205 01:56:12 -- common/autotest_common.sh@887 -- # local i 00:26:12.205 01:56:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:12.205 01:56:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:12.205 01:56:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.464 01:56:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:12.723 [ 00:26:12.723 { 00:26:12.723 "name": "BaseBdev1", 00:26:12.723 "aliases": [ 00:26:12.723 "a9f0c118-4008-4488-881f-cba75e77bcf2" 00:26:12.723 ], 00:26:12.723 "product_name": "Malloc disk", 00:26:12.723 "block_size": 512, 00:26:12.723 "num_blocks": 65536, 00:26:12.723 "uuid": "a9f0c118-4008-4488-881f-cba75e77bcf2", 00:26:12.723 "assigned_rate_limits": { 00:26:12.723 "rw_ios_per_sec": 0, 00:26:12.723 "rw_mbytes_per_sec": 0, 00:26:12.723 "r_mbytes_per_sec": 0, 00:26:12.723 "w_mbytes_per_sec": 0 
00:26:12.723 }, 00:26:12.723 "claimed": true, 00:26:12.723 "claim_type": "exclusive_write", 00:26:12.723 "zoned": false, 00:26:12.723 "supported_io_types": { 00:26:12.723 "read": true, 00:26:12.723 "write": true, 00:26:12.723 "unmap": true, 00:26:12.723 "write_zeroes": true, 00:26:12.723 "flush": true, 00:26:12.723 "reset": true, 00:26:12.723 "compare": false, 00:26:12.723 "compare_and_write": false, 00:26:12.723 "abort": true, 00:26:12.723 "nvme_admin": false, 00:26:12.723 "nvme_io": false 00:26:12.723 }, 00:26:12.723 "memory_domains": [ 00:26:12.723 { 00:26:12.723 "dma_device_id": "system", 00:26:12.723 "dma_device_type": 1 00:26:12.723 }, 00:26:12.723 { 00:26:12.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.723 "dma_device_type": 2 00:26:12.723 } 00:26:12.723 ], 00:26:12.723 "driver_specific": {} 00:26:12.723 } 00:26:12.723 ] 00:26:12.723 01:56:12 -- common/autotest_common.sh@893 -- # return 0 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.723 01:56:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.981 01:56:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:12.981 "name": "Existed_Raid", 00:26:12.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.981 "strip_size_kb": 64, 00:26:12.981 "state": "configuring", 00:26:12.981 "raid_level": "raid0", 00:26:12.981 "superblock": false, 00:26:12.981 "num_base_bdevs": 3, 00:26:12.981 "num_base_bdevs_discovered": 1, 00:26:12.981 "num_base_bdevs_operational": 3, 00:26:12.981 "base_bdevs_list": [ 00:26:12.981 { 00:26:12.981 "name": "BaseBdev1", 00:26:12.981 "uuid": "a9f0c118-4008-4488-881f-cba75e77bcf2", 00:26:12.981 "is_configured": true, 00:26:12.981 "data_offset": 0, 00:26:12.981 "data_size": 65536 00:26:12.981 }, 00:26:12.981 { 00:26:12.981 "name": "BaseBdev2", 00:26:12.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.981 "is_configured": false, 00:26:12.981 "data_offset": 0, 00:26:12.981 "data_size": 0 00:26:12.981 }, 00:26:12.981 { 00:26:12.981 "name": "BaseBdev3", 00:26:12.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.981 "is_configured": false, 00:26:12.981 "data_offset": 0, 00:26:12.981 "data_size": 0 00:26:12.981 } 00:26:12.981 ] 00:26:12.981 }' 00:26:12.981 01:56:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:12.981 01:56:12 -- common/autotest_common.sh@10 -- # set +x 00:26:13.546 01:56:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:13.804 [2024-04-24 01:56:13.788398] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:13.804 
[2024-04-24 01:56:13.788482] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:26:13.804 01:56:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:26:13.804 01:56:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:14.063 [2024-04-24 01:56:14.060496] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.063 [2024-04-24 01:56:14.062558] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:14.063 [2024-04-24 01:56:14.062619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:14.063 [2024-04-24 01:56:14.062629] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:14.063 [2024-04-24 01:56:14.062654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.063 01:56:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.322 01:56:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:14.322 "name": "Existed_Raid", 00:26:14.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.322 "strip_size_kb": 64, 00:26:14.322 "state": "configuring", 00:26:14.322 "raid_level": "raid0", 00:26:14.322 "superblock": false, 00:26:14.322 "num_base_bdevs": 3, 00:26:14.322 "num_base_bdevs_discovered": 1, 00:26:14.322 "num_base_bdevs_operational": 3, 00:26:14.322 "base_bdevs_list": [ 00:26:14.322 { 00:26:14.322 "name": "BaseBdev1", 00:26:14.322 "uuid": "a9f0c118-4008-4488-881f-cba75e77bcf2", 00:26:14.322 "is_configured": true, 00:26:14.322 "data_offset": 0, 00:26:14.322 "data_size": 65536 00:26:14.322 }, 00:26:14.322 { 00:26:14.322 "name": "BaseBdev2", 00:26:14.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.322 "is_configured": false, 00:26:14.322 "data_offset": 0, 00:26:14.322 "data_size": 0 00:26:14.322 }, 00:26:14.322 { 00:26:14.322 "name": "BaseBdev3", 00:26:14.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.322 "is_configured": false, 00:26:14.322 "data_offset": 0, 00:26:14.322 "data_size": 0 00:26:14.322 } 00:26:14.322 ] 00:26:14.322 }' 00:26:14.322 01:56:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:14.322 01:56:14 -- common/autotest_common.sh@10 
-- # set +x 00:26:14.889 01:56:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:15.457 [2024-04-24 01:56:15.240792] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:15.457 BaseBdev2 00:26:15.457 01:56:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:15.457 01:56:15 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:15.457 01:56:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:15.457 01:56:15 -- common/autotest_common.sh@887 -- # local i 00:26:15.457 01:56:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:15.457 01:56:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:15.457 01:56:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.457 01:56:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:15.716 [ 00:26:15.716 { 00:26:15.716 "name": "BaseBdev2", 00:26:15.716 "aliases": [ 00:26:15.717 "d6bf0007-187c-4110-b339-5b39c8018b9a" 00:26:15.717 ], 00:26:15.717 "product_name": "Malloc disk", 00:26:15.717 "block_size": 512, 00:26:15.717 "num_blocks": 65536, 00:26:15.717 "uuid": "d6bf0007-187c-4110-b339-5b39c8018b9a", 00:26:15.717 "assigned_rate_limits": { 00:26:15.717 "rw_ios_per_sec": 0, 00:26:15.717 "rw_mbytes_per_sec": 0, 00:26:15.717 "r_mbytes_per_sec": 0, 00:26:15.717 "w_mbytes_per_sec": 0 00:26:15.717 }, 00:26:15.717 "claimed": true, 00:26:15.717 "claim_type": "exclusive_write", 00:26:15.717 "zoned": false, 00:26:15.717 "supported_io_types": { 00:26:15.717 "read": true, 00:26:15.717 "write": true, 00:26:15.717 "unmap": true, 00:26:15.717 "write_zeroes": true, 00:26:15.717 "flush": true, 00:26:15.717 "reset": true, 00:26:15.717 "compare": false, 00:26:15.717 "compare_and_write": false, 00:26:15.717 "abort": true, 00:26:15.717 "nvme_admin": false, 00:26:15.717 "nvme_io": false 00:26:15.717 }, 00:26:15.717 "memory_domains": [ 00:26:15.717 { 00:26:15.717 "dma_device_id": "system", 00:26:15.717 "dma_device_type": 1 00:26:15.717 }, 00:26:15.717 { 00:26:15.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.717 "dma_device_type": 2 00:26:15.717 } 00:26:15.717 ], 00:26:15.717 "driver_specific": {} 00:26:15.717 } 00:26:15.717 ] 00:26:15.717 01:56:15 -- common/autotest_common.sh@893 -- # return 0 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.717 01:56:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.975 01:56:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:15.975 "name": "Existed_Raid", 00:26:15.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.975 "strip_size_kb": 64, 00:26:15.975 "state": "configuring", 00:26:15.975 "raid_level": "raid0", 00:26:15.975 "superblock": false, 00:26:15.975 "num_base_bdevs": 3, 00:26:15.975 "num_base_bdevs_discovered": 2, 00:26:15.975 "num_base_bdevs_operational": 3, 00:26:15.975 "base_bdevs_list": [ 00:26:15.975 { 00:26:15.975 "name": "BaseBdev1", 00:26:15.975 "uuid": "a9f0c118-4008-4488-881f-cba75e77bcf2", 00:26:15.975 "is_configured": true, 00:26:15.975 "data_offset": 0, 00:26:15.975 "data_size": 65536 00:26:15.975 }, 00:26:15.975 { 00:26:15.975 "name": "BaseBdev2", 00:26:15.975 "uuid": "d6bf0007-187c-4110-b339-5b39c8018b9a", 00:26:15.975 "is_configured": true, 00:26:15.975 "data_offset": 0, 00:26:15.975 "data_size": 65536 00:26:15.975 }, 00:26:15.975 { 00:26:15.975 "name": "BaseBdev3", 00:26:15.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.975 "is_configured": false, 00:26:15.975 "data_offset": 0, 00:26:15.975 "data_size": 0 00:26:15.975 } 00:26:15.975 ] 00:26:15.975 }' 00:26:15.975 01:56:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:15.975 01:56:16 -- common/autotest_common.sh@10 -- # set +x 00:26:16.910 01:56:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:16.910 [2024-04-24 01:56:16.980876] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.910 [2024-04-24 01:56:16.980930] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:16.910 [2024-04-24 01:56:16.980938] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:16.910 [2024-04-24 01:56:16.981102] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:26:16.910 [2024-04-24 01:56:16.981495] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:16.910 [2024-04-24 01:56:16.981518] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:26:16.910 [2024-04-24 01:56:16.981758] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.910 BaseBdev3 00:26:17.168 01:56:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:17.168 01:56:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:17.168 01:56:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:17.168 01:56:16 -- common/autotest_common.sh@887 -- # local i 00:26:17.168 01:56:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:17.168 01:56:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:17.168 01:56:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:17.426 01:56:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:17.426 [ 00:26:17.426 { 00:26:17.426 "name": "BaseBdev3", 00:26:17.426 "aliases": [ 00:26:17.426 "1d7e6d30-861e-4a9a-9685-fa37ab20939e" 00:26:17.426 ], 00:26:17.426 "product_name": 
"Malloc disk", 00:26:17.426 "block_size": 512, 00:26:17.426 "num_blocks": 65536, 00:26:17.426 "uuid": "1d7e6d30-861e-4a9a-9685-fa37ab20939e", 00:26:17.426 "assigned_rate_limits": { 00:26:17.426 "rw_ios_per_sec": 0, 00:26:17.426 "rw_mbytes_per_sec": 0, 00:26:17.426 "r_mbytes_per_sec": 0, 00:26:17.426 "w_mbytes_per_sec": 0 00:26:17.426 }, 00:26:17.426 "claimed": true, 00:26:17.426 "claim_type": "exclusive_write", 00:26:17.426 "zoned": false, 00:26:17.426 "supported_io_types": { 00:26:17.426 "read": true, 00:26:17.426 "write": true, 00:26:17.426 "unmap": true, 00:26:17.426 "write_zeroes": true, 00:26:17.426 "flush": true, 00:26:17.426 "reset": true, 00:26:17.426 "compare": false, 00:26:17.426 "compare_and_write": false, 00:26:17.426 "abort": true, 00:26:17.426 "nvme_admin": false, 00:26:17.426 "nvme_io": false 00:26:17.426 }, 00:26:17.426 "memory_domains": [ 00:26:17.426 { 00:26:17.426 "dma_device_id": "system", 00:26:17.426 "dma_device_type": 1 00:26:17.426 }, 00:26:17.426 { 00:26:17.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.426 "dma_device_type": 2 00:26:17.426 } 00:26:17.426 ], 00:26:17.426 "driver_specific": {} 00:26:17.426 } 00:26:17.426 ] 00:26:17.426 01:56:17 -- common/autotest_common.sh@893 -- # return 0 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.426 01:56:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.992 01:56:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:17.992 "name": "Existed_Raid", 00:26:17.992 "uuid": "a6934786-f357-4ac5-b404-7be6933c5b05", 00:26:17.992 "strip_size_kb": 64, 00:26:17.992 "state": "online", 00:26:17.992 "raid_level": "raid0", 00:26:17.992 "superblock": false, 00:26:17.992 "num_base_bdevs": 3, 00:26:17.992 "num_base_bdevs_discovered": 3, 00:26:17.992 "num_base_bdevs_operational": 3, 00:26:17.992 "base_bdevs_list": [ 00:26:17.992 { 00:26:17.992 "name": "BaseBdev1", 00:26:17.992 "uuid": "a9f0c118-4008-4488-881f-cba75e77bcf2", 00:26:17.992 "is_configured": true, 00:26:17.992 "data_offset": 0, 00:26:17.992 "data_size": 65536 00:26:17.992 }, 00:26:17.992 { 00:26:17.992 "name": "BaseBdev2", 00:26:17.992 "uuid": "d6bf0007-187c-4110-b339-5b39c8018b9a", 00:26:17.992 "is_configured": true, 00:26:17.992 "data_offset": 0, 00:26:17.992 "data_size": 65536 00:26:17.992 }, 00:26:17.992 { 00:26:17.992 "name": "BaseBdev3", 00:26:17.992 "uuid": "1d7e6d30-861e-4a9a-9685-fa37ab20939e", 00:26:17.992 "is_configured": true, 00:26:17.992 "data_offset": 0, 00:26:17.992 "data_size": 65536 
00:26:17.992 } 00:26:17.992 ] 00:26:17.992 }' 00:26:17.992 01:56:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:17.992 01:56:17 -- common/autotest_common.sh@10 -- # set +x 00:26:18.559 01:56:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:18.818 [2024-04-24 01:56:18.677386] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:18.818 [2024-04-24 01:56:18.677430] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:18.818 [2024-04-24 01:56:18.677478] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.818 01:56:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.076 01:56:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:19.076 "name": "Existed_Raid", 00:26:19.076 "uuid": "a6934786-f357-4ac5-b404-7be6933c5b05", 00:26:19.076 "strip_size_kb": 64, 00:26:19.076 "state": "offline", 00:26:19.076 "raid_level": "raid0", 00:26:19.076 "superblock": false, 00:26:19.076 "num_base_bdevs": 3, 00:26:19.076 "num_base_bdevs_discovered": 2, 00:26:19.076 "num_base_bdevs_operational": 2, 00:26:19.076 "base_bdevs_list": [ 00:26:19.076 { 00:26:19.076 "name": null, 00:26:19.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.076 "is_configured": false, 00:26:19.076 "data_offset": 0, 00:26:19.076 "data_size": 65536 00:26:19.076 }, 00:26:19.076 { 00:26:19.076 "name": "BaseBdev2", 00:26:19.076 "uuid": "d6bf0007-187c-4110-b339-5b39c8018b9a", 00:26:19.076 "is_configured": true, 00:26:19.076 "data_offset": 0, 00:26:19.076 "data_size": 65536 00:26:19.076 }, 00:26:19.076 { 00:26:19.076 "name": "BaseBdev3", 00:26:19.076 "uuid": "1d7e6d30-861e-4a9a-9685-fa37ab20939e", 00:26:19.076 "is_configured": true, 00:26:19.076 "data_offset": 0, 00:26:19.076 "data_size": 65536 00:26:19.076 } 00:26:19.076 ] 00:26:19.076 }' 00:26:19.076 01:56:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:19.076 01:56:19 -- common/autotest_common.sh@10 -- # set +x 00:26:19.643 01:56:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:19.643 01:56:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:19.643 01:56:19 -- 
bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:19.643 01:56:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.901 01:56:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:19.901 01:56:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:19.901 01:56:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:20.160 [2024-04-24 01:56:20.014439] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:20.160 01:56:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:20.160 01:56:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:20.160 01:56:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.160 01:56:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:20.419 01:56:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:20.419 01:56:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:20.419 01:56:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:20.677 [2024-04-24 01:56:20.508589] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:20.677 [2024-04-24 01:56:20.508654] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:20.677 01:56:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:20.677 01:56:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:20.677 01:56:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.677 01:56:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:20.999 01:56:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:20.999 01:56:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:20.999 01:56:20 -- bdev/bdev_raid.sh@287 -- # killprocess 123282 00:26:20.999 01:56:20 -- common/autotest_common.sh@936 -- # '[' -z 123282 ']' 00:26:20.999 01:56:20 -- common/autotest_common.sh@940 -- # kill -0 123282 00:26:20.999 01:56:20 -- common/autotest_common.sh@941 -- # uname 00:26:20.999 01:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:20.999 01:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123282 00:26:20.999 01:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:20.999 01:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:20.999 01:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123282' 00:26:20.999 killing process with pid 123282 00:26:20.999 01:56:20 -- common/autotest_common.sh@955 -- # kill 123282 00:26:20.999 [2024-04-24 01:56:20.875550] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:20.999 [2024-04-24 01:56:20.875655] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.999 01:56:20 -- common/autotest_common.sh@960 -- # wait 123282 00:26:22.378 ************************************ 00:26:22.378 END TEST raid_state_function_test 00:26:22.378 ************************************ 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:22.378 00:26:22.378 real 0m12.908s 00:26:22.378 user 0m22.051s 00:26:22.378 sys 0m1.777s 00:26:22.378 01:56:22 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:26:22.378 01:56:22 -- common/autotest_common.sh@10 -- # set +x 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:26:22.378 01:56:22 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:22.378 01:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:22.378 01:56:22 -- common/autotest_common.sh@10 -- # set +x 00:26:22.378 ************************************ 00:26:22.378 START TEST raid_state_function_test_sb 00:26:22.378 ************************************ 00:26:22.378 01:56:22 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 true 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=123675 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123675' 00:26:22.378 Process raid pid: 123675 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123675 /var/tmp/spdk-raid.sock 00:26:22.378 01:56:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:22.378 01:56:22 -- common/autotest_common.sh@817 -- # '[' -z 123675 ']' 00:26:22.379 01:56:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:22.379 01:56:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:22.379 01:56:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:22.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
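For reference, the flow traced above and below can be reproduced by hand against the same bdev_svc instance. This is only a rough sketch of the RPC sequence the test drives (paths and socket are the ones used in this run; the loop and the omitted cleanup/kill steps are simplifications, not part of the harness):

  # start the service under test (the harness then polls for the RPC socket before issuing RPCs)
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

  # create three 32 MiB malloc bdevs (65536 x 512-byte blocks each) to act as RAID members
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
  done

  # assemble a raid0 volume with a 64 KiB strip; -s requests an on-disk superblock,
  # which is what distinguishes this _sb variant from the test that just finished
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # query the array state the same way verify_raid_bdev_state does
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'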
00:26:22.379 01:56:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:22.379 01:56:22 -- common/autotest_common.sh@10 -- # set +x 00:26:22.379 [2024-04-24 01:56:22.453351] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:26:22.379 [2024-04-24 01:56:22.453557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.638 [2024-04-24 01:56:22.631229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.909 [2024-04-24 01:56:22.882377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.171 [2024-04-24 01:56:23.128265] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:23.429 01:56:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.430 01:56:23 -- common/autotest_common.sh@850 -- # return 0 00:26:23.430 01:56:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:23.688 [2024-04-24 01:56:23.629783] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:23.688 [2024-04-24 01:56:23.629867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:23.688 [2024-04-24 01:56:23.629878] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:23.688 [2024-04-24 01:56:23.629915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:23.688 [2024-04-24 01:56:23.629947] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:23.688 [2024-04-24 01:56:23.629990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.688 01:56:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.947 01:56:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:23.947 "name": "Existed_Raid", 00:26:23.947 "uuid": "906f05d2-0198-4952-a314-2fca8deabaa3", 00:26:23.947 "strip_size_kb": 64, 00:26:23.947 "state": "configuring", 00:26:23.947 "raid_level": "raid0", 00:26:23.947 "superblock": true, 00:26:23.947 "num_base_bdevs": 3, 00:26:23.947 "num_base_bdevs_discovered": 0, 00:26:23.947 "num_base_bdevs_operational": 3, 00:26:23.947 "base_bdevs_list": [ 00:26:23.947 { 00:26:23.947 "name": 
"BaseBdev1", 00:26:23.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.947 "is_configured": false, 00:26:23.947 "data_offset": 0, 00:26:23.947 "data_size": 0 00:26:23.947 }, 00:26:23.947 { 00:26:23.947 "name": "BaseBdev2", 00:26:23.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.947 "is_configured": false, 00:26:23.947 "data_offset": 0, 00:26:23.947 "data_size": 0 00:26:23.947 }, 00:26:23.947 { 00:26:23.947 "name": "BaseBdev3", 00:26:23.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.947 "is_configured": false, 00:26:23.947 "data_offset": 0, 00:26:23.947 "data_size": 0 00:26:23.947 } 00:26:23.947 ] 00:26:23.947 }' 00:26:23.947 01:56:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:23.947 01:56:23 -- common/autotest_common.sh@10 -- # set +x 00:26:24.515 01:56:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:24.774 [2024-04-24 01:56:24.709816] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:24.774 [2024-04-24 01:56:24.709861] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:26:24.774 01:56:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:25.033 [2024-04-24 01:56:25.025941] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:25.033 [2024-04-24 01:56:25.026020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:25.033 [2024-04-24 01:56:25.026032] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:25.033 [2024-04-24 01:56:25.026052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:25.033 [2024-04-24 01:56:25.026060] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:25.033 [2024-04-24 01:56:25.026085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:25.033 01:56:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:25.291 [2024-04-24 01:56:25.291645] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:25.291 BaseBdev1 00:26:25.291 01:56:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:25.291 01:56:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:25.291 01:56:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:25.291 01:56:25 -- common/autotest_common.sh@887 -- # local i 00:26:25.291 01:56:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:25.291 01:56:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:25.291 01:56:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:25.549 01:56:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:25.806 [ 00:26:25.806 { 00:26:25.806 "name": "BaseBdev1", 00:26:25.806 "aliases": [ 00:26:25.806 "67c144c3-5a6c-493b-bc13-aec34d3465ca" 00:26:25.806 ], 00:26:25.806 "product_name": "Malloc disk", 00:26:25.806 "block_size": 512, 00:26:25.806 
"num_blocks": 65536, 00:26:25.806 "uuid": "67c144c3-5a6c-493b-bc13-aec34d3465ca", 00:26:25.806 "assigned_rate_limits": { 00:26:25.806 "rw_ios_per_sec": 0, 00:26:25.806 "rw_mbytes_per_sec": 0, 00:26:25.806 "r_mbytes_per_sec": 0, 00:26:25.806 "w_mbytes_per_sec": 0 00:26:25.806 }, 00:26:25.806 "claimed": true, 00:26:25.806 "claim_type": "exclusive_write", 00:26:25.806 "zoned": false, 00:26:25.806 "supported_io_types": { 00:26:25.806 "read": true, 00:26:25.806 "write": true, 00:26:25.806 "unmap": true, 00:26:25.806 "write_zeroes": true, 00:26:25.806 "flush": true, 00:26:25.806 "reset": true, 00:26:25.806 "compare": false, 00:26:25.806 "compare_and_write": false, 00:26:25.806 "abort": true, 00:26:25.806 "nvme_admin": false, 00:26:25.806 "nvme_io": false 00:26:25.806 }, 00:26:25.806 "memory_domains": [ 00:26:25.806 { 00:26:25.806 "dma_device_id": "system", 00:26:25.806 "dma_device_type": 1 00:26:25.806 }, 00:26:25.806 { 00:26:25.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.806 "dma_device_type": 2 00:26:25.806 } 00:26:25.806 ], 00:26:25.806 "driver_specific": {} 00:26:25.807 } 00:26:25.807 ] 00:26:26.065 01:56:25 -- common/autotest_common.sh@893 -- # return 0 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.065 01:56:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.324 01:56:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:26.324 "name": "Existed_Raid", 00:26:26.324 "uuid": "f6a99bcf-59b6-4346-a68b-26e31c7a8869", 00:26:26.324 "strip_size_kb": 64, 00:26:26.324 "state": "configuring", 00:26:26.324 "raid_level": "raid0", 00:26:26.324 "superblock": true, 00:26:26.324 "num_base_bdevs": 3, 00:26:26.324 "num_base_bdevs_discovered": 1, 00:26:26.324 "num_base_bdevs_operational": 3, 00:26:26.324 "base_bdevs_list": [ 00:26:26.324 { 00:26:26.324 "name": "BaseBdev1", 00:26:26.324 "uuid": "67c144c3-5a6c-493b-bc13-aec34d3465ca", 00:26:26.324 "is_configured": true, 00:26:26.324 "data_offset": 2048, 00:26:26.324 "data_size": 63488 00:26:26.324 }, 00:26:26.324 { 00:26:26.324 "name": "BaseBdev2", 00:26:26.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.324 "is_configured": false, 00:26:26.324 "data_offset": 0, 00:26:26.324 "data_size": 0 00:26:26.324 }, 00:26:26.324 { 00:26:26.324 "name": "BaseBdev3", 00:26:26.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.324 "is_configured": false, 00:26:26.324 "data_offset": 0, 00:26:26.324 "data_size": 0 00:26:26.324 } 00:26:26.324 ] 00:26:26.324 }' 00:26:26.324 01:56:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:26.324 01:56:26 -- common/autotest_common.sh@10 -- # set +x 00:26:26.892 
01:56:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:27.150 [2024-04-24 01:56:27.108086] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:27.150 [2024-04-24 01:56:27.108161] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:26:27.150 01:56:27 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:26:27.150 01:56:27 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:27.408 01:56:27 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:27.666 BaseBdev1 00:26:27.666 01:56:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:26:27.666 01:56:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:27.666 01:56:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:27.666 01:56:27 -- common/autotest_common.sh@887 -- # local i 00:26:27.666 01:56:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:27.666 01:56:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:27.666 01:56:27 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.924 01:56:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:28.182 [ 00:26:28.182 { 00:26:28.182 "name": "BaseBdev1", 00:26:28.182 "aliases": [ 00:26:28.182 "e9351be9-8187-4599-a68f-88ba7f47ce60" 00:26:28.182 ], 00:26:28.182 "product_name": "Malloc disk", 00:26:28.182 "block_size": 512, 00:26:28.182 "num_blocks": 65536, 00:26:28.182 "uuid": "e9351be9-8187-4599-a68f-88ba7f47ce60", 00:26:28.182 "assigned_rate_limits": { 00:26:28.182 "rw_ios_per_sec": 0, 00:26:28.182 "rw_mbytes_per_sec": 0, 00:26:28.182 "r_mbytes_per_sec": 0, 00:26:28.182 "w_mbytes_per_sec": 0 00:26:28.182 }, 00:26:28.182 "claimed": false, 00:26:28.182 "zoned": false, 00:26:28.182 "supported_io_types": { 00:26:28.182 "read": true, 00:26:28.182 "write": true, 00:26:28.182 "unmap": true, 00:26:28.182 "write_zeroes": true, 00:26:28.182 "flush": true, 00:26:28.182 "reset": true, 00:26:28.182 "compare": false, 00:26:28.182 "compare_and_write": false, 00:26:28.182 "abort": true, 00:26:28.182 "nvme_admin": false, 00:26:28.182 "nvme_io": false 00:26:28.182 }, 00:26:28.182 "memory_domains": [ 00:26:28.182 { 00:26:28.182 "dma_device_id": "system", 00:26:28.182 "dma_device_type": 1 00:26:28.182 }, 00:26:28.182 { 00:26:28.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.183 "dma_device_type": 2 00:26:28.183 } 00:26:28.183 ], 00:26:28.183 "driver_specific": {} 00:26:28.183 } 00:26:28.183 ] 00:26:28.183 01:56:28 -- common/autotest_common.sh@893 -- # return 0 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:28.183 [2024-04-24 01:56:28.230244] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:28.183 [2024-04-24 01:56:28.232250] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:28.183 [2024-04-24 01:56:28.232306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:26:28.183 [2024-04-24 01:56:28.232315] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:28.183 [2024-04-24 01:56:28.232340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.183 01:56:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.441 01:56:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.441 "name": "Existed_Raid", 00:26:28.442 "uuid": "e4dd7c82-598d-492e-84f2-886db954d345", 00:26:28.442 "strip_size_kb": 64, 00:26:28.442 "state": "configuring", 00:26:28.442 "raid_level": "raid0", 00:26:28.442 "superblock": true, 00:26:28.442 "num_base_bdevs": 3, 00:26:28.442 "num_base_bdevs_discovered": 1, 00:26:28.442 "num_base_bdevs_operational": 3, 00:26:28.442 "base_bdevs_list": [ 00:26:28.442 { 00:26:28.442 "name": "BaseBdev1", 00:26:28.442 "uuid": "e9351be9-8187-4599-a68f-88ba7f47ce60", 00:26:28.442 "is_configured": true, 00:26:28.442 "data_offset": 2048, 00:26:28.442 "data_size": 63488 00:26:28.442 }, 00:26:28.442 { 00:26:28.442 "name": "BaseBdev2", 00:26:28.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.442 "is_configured": false, 00:26:28.442 "data_offset": 0, 00:26:28.442 "data_size": 0 00:26:28.442 }, 00:26:28.442 { 00:26:28.442 "name": "BaseBdev3", 00:26:28.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.442 "is_configured": false, 00:26:28.442 "data_offset": 0, 00:26:28.442 "data_size": 0 00:26:28.442 } 00:26:28.442 ] 00:26:28.442 }' 00:26:28.442 01:56:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.442 01:56:28 -- common/autotest_common.sh@10 -- # set +x 00:26:29.007 01:56:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:29.266 [2024-04-24 01:56:29.276516] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:29.266 BaseBdev2 00:26:29.266 01:56:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:29.266 01:56:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:29.266 01:56:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:29.266 01:56:29 -- common/autotest_common.sh@887 -- # local i 00:26:29.266 01:56:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:29.266 01:56:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 
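The waitforbdev helper being traced here (from common/autotest_common.sh) reduces to two RPCs once its locals are set up: drain pending examine callbacks, then fetch the named bdev with the 2000 timeout seen above (bdev_timeout) passed through as -t. A simplified stand-in for it, not the helper's exact code, might be:

  waitforbdev_sketch() {
      # hypothetical name; mirrors only the two RPC calls visible in the trace
      local bdev_name=$1
      local rpc_sock=/var/tmp/spdk-raid.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" bdev_wait_for_examine
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" bdev_get_bdevs -b "$bdev_name" -t 2000
  }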
00:26:29.266 01:56:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:29.524 01:56:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:29.783 [ 00:26:29.783 { 00:26:29.783 "name": "BaseBdev2", 00:26:29.783 "aliases": [ 00:26:29.783 "bf0dce02-0996-44f1-a088-0b23e399cc89" 00:26:29.783 ], 00:26:29.783 "product_name": "Malloc disk", 00:26:29.783 "block_size": 512, 00:26:29.783 "num_blocks": 65536, 00:26:29.783 "uuid": "bf0dce02-0996-44f1-a088-0b23e399cc89", 00:26:29.783 "assigned_rate_limits": { 00:26:29.783 "rw_ios_per_sec": 0, 00:26:29.783 "rw_mbytes_per_sec": 0, 00:26:29.783 "r_mbytes_per_sec": 0, 00:26:29.783 "w_mbytes_per_sec": 0 00:26:29.783 }, 00:26:29.783 "claimed": true, 00:26:29.783 "claim_type": "exclusive_write", 00:26:29.783 "zoned": false, 00:26:29.783 "supported_io_types": { 00:26:29.783 "read": true, 00:26:29.783 "write": true, 00:26:29.783 "unmap": true, 00:26:29.783 "write_zeroes": true, 00:26:29.783 "flush": true, 00:26:29.783 "reset": true, 00:26:29.783 "compare": false, 00:26:29.783 "compare_and_write": false, 00:26:29.783 "abort": true, 00:26:29.783 "nvme_admin": false, 00:26:29.783 "nvme_io": false 00:26:29.783 }, 00:26:29.783 "memory_domains": [ 00:26:29.783 { 00:26:29.783 "dma_device_id": "system", 00:26:29.783 "dma_device_type": 1 00:26:29.783 }, 00:26:29.783 { 00:26:29.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.783 "dma_device_type": 2 00:26:29.783 } 00:26:29.783 ], 00:26:29.783 "driver_specific": {} 00:26:29.783 } 00:26:29.783 ] 00:26:29.783 01:56:29 -- common/autotest_common.sh@893 -- # return 0 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.783 01:56:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.041 01:56:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:30.041 "name": "Existed_Raid", 00:26:30.041 "uuid": "e4dd7c82-598d-492e-84f2-886db954d345", 00:26:30.041 "strip_size_kb": 64, 00:26:30.041 "state": "configuring", 00:26:30.041 "raid_level": "raid0", 00:26:30.041 "superblock": true, 00:26:30.041 "num_base_bdevs": 3, 00:26:30.041 "num_base_bdevs_discovered": 2, 00:26:30.041 "num_base_bdevs_operational": 3, 00:26:30.041 "base_bdevs_list": [ 00:26:30.041 { 00:26:30.041 "name": "BaseBdev1", 00:26:30.041 "uuid": "e9351be9-8187-4599-a68f-88ba7f47ce60", 00:26:30.041 "is_configured": true, 
00:26:30.041 "data_offset": 2048, 00:26:30.041 "data_size": 63488 00:26:30.041 }, 00:26:30.041 { 00:26:30.041 "name": "BaseBdev2", 00:26:30.041 "uuid": "bf0dce02-0996-44f1-a088-0b23e399cc89", 00:26:30.041 "is_configured": true, 00:26:30.041 "data_offset": 2048, 00:26:30.041 "data_size": 63488 00:26:30.041 }, 00:26:30.041 { 00:26:30.041 "name": "BaseBdev3", 00:26:30.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.041 "is_configured": false, 00:26:30.041 "data_offset": 0, 00:26:30.041 "data_size": 0 00:26:30.041 } 00:26:30.041 ] 00:26:30.041 }' 00:26:30.041 01:56:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:30.041 01:56:29 -- common/autotest_common.sh@10 -- # set +x 00:26:30.618 01:56:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:30.876 [2024-04-24 01:56:30.887044] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:30.876 [2024-04-24 01:56:30.887270] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:30.876 [2024-04-24 01:56:30.887285] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:30.877 [2024-04-24 01:56:30.887418] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:26:30.877 [2024-04-24 01:56:30.887764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:30.877 [2024-04-24 01:56:30.887794] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:26:30.877 [2024-04-24 01:56:30.887942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.877 BaseBdev3 00:26:30.877 01:56:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:30.877 01:56:30 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:30.877 01:56:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:30.877 01:56:30 -- common/autotest_common.sh@887 -- # local i 00:26:30.877 01:56:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:30.877 01:56:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:30.877 01:56:30 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:31.134 01:56:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:31.392 [ 00:26:31.392 { 00:26:31.392 "name": "BaseBdev3", 00:26:31.392 "aliases": [ 00:26:31.392 "8561a18f-b2dc-4df3-bc14-6f3ed3c1341b" 00:26:31.392 ], 00:26:31.392 "product_name": "Malloc disk", 00:26:31.392 "block_size": 512, 00:26:31.392 "num_blocks": 65536, 00:26:31.392 "uuid": "8561a18f-b2dc-4df3-bc14-6f3ed3c1341b", 00:26:31.392 "assigned_rate_limits": { 00:26:31.392 "rw_ios_per_sec": 0, 00:26:31.392 "rw_mbytes_per_sec": 0, 00:26:31.392 "r_mbytes_per_sec": 0, 00:26:31.392 "w_mbytes_per_sec": 0 00:26:31.392 }, 00:26:31.392 "claimed": true, 00:26:31.392 "claim_type": "exclusive_write", 00:26:31.392 "zoned": false, 00:26:31.392 "supported_io_types": { 00:26:31.392 "read": true, 00:26:31.392 "write": true, 00:26:31.392 "unmap": true, 00:26:31.392 "write_zeroes": true, 00:26:31.392 "flush": true, 00:26:31.392 "reset": true, 00:26:31.392 "compare": false, 00:26:31.392 "compare_and_write": false, 00:26:31.392 "abort": true, 00:26:31.392 "nvme_admin": false, 00:26:31.392 
"nvme_io": false 00:26:31.392 }, 00:26:31.392 "memory_domains": [ 00:26:31.392 { 00:26:31.392 "dma_device_id": "system", 00:26:31.392 "dma_device_type": 1 00:26:31.392 }, 00:26:31.392 { 00:26:31.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.392 "dma_device_type": 2 00:26:31.392 } 00:26:31.392 ], 00:26:31.392 "driver_specific": {} 00:26:31.392 } 00:26:31.392 ] 00:26:31.392 01:56:31 -- common/autotest_common.sh@893 -- # return 0 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.392 01:56:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.650 01:56:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:31.650 "name": "Existed_Raid", 00:26:31.650 "uuid": "e4dd7c82-598d-492e-84f2-886db954d345", 00:26:31.650 "strip_size_kb": 64, 00:26:31.650 "state": "online", 00:26:31.650 "raid_level": "raid0", 00:26:31.650 "superblock": true, 00:26:31.650 "num_base_bdevs": 3, 00:26:31.650 "num_base_bdevs_discovered": 3, 00:26:31.650 "num_base_bdevs_operational": 3, 00:26:31.650 "base_bdevs_list": [ 00:26:31.650 { 00:26:31.650 "name": "BaseBdev1", 00:26:31.650 "uuid": "e9351be9-8187-4599-a68f-88ba7f47ce60", 00:26:31.650 "is_configured": true, 00:26:31.650 "data_offset": 2048, 00:26:31.650 "data_size": 63488 00:26:31.650 }, 00:26:31.650 { 00:26:31.650 "name": "BaseBdev2", 00:26:31.650 "uuid": "bf0dce02-0996-44f1-a088-0b23e399cc89", 00:26:31.650 "is_configured": true, 00:26:31.650 "data_offset": 2048, 00:26:31.650 "data_size": 63488 00:26:31.650 }, 00:26:31.650 { 00:26:31.650 "name": "BaseBdev3", 00:26:31.650 "uuid": "8561a18f-b2dc-4df3-bc14-6f3ed3c1341b", 00:26:31.650 "is_configured": true, 00:26:31.650 "data_offset": 2048, 00:26:31.650 "data_size": 63488 00:26:31.650 } 00:26:31.650 ] 00:26:31.650 }' 00:26:31.650 01:56:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:31.650 01:56:31 -- common/autotest_common.sh@10 -- # set +x 00:26:32.215 01:56:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:32.473 [2024-04-24 01:56:32.364553] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:32.473 [2024-04-24 01:56:32.364594] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.473 [2024-04-24 01:56:32.364668] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:32.473 01:56:32 -- 
bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.473 01:56:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:32.731 01:56:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:32.731 "name": "Existed_Raid", 00:26:32.731 "uuid": "e4dd7c82-598d-492e-84f2-886db954d345", 00:26:32.731 "strip_size_kb": 64, 00:26:32.731 "state": "offline", 00:26:32.731 "raid_level": "raid0", 00:26:32.731 "superblock": true, 00:26:32.731 "num_base_bdevs": 3, 00:26:32.731 "num_base_bdevs_discovered": 2, 00:26:32.731 "num_base_bdevs_operational": 2, 00:26:32.731 "base_bdevs_list": [ 00:26:32.731 { 00:26:32.731 "name": null, 00:26:32.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.731 "is_configured": false, 00:26:32.731 "data_offset": 2048, 00:26:32.731 "data_size": 63488 00:26:32.731 }, 00:26:32.731 { 00:26:32.731 "name": "BaseBdev2", 00:26:32.731 "uuid": "bf0dce02-0996-44f1-a088-0b23e399cc89", 00:26:32.731 "is_configured": true, 00:26:32.731 "data_offset": 2048, 00:26:32.731 "data_size": 63488 00:26:32.731 }, 00:26:32.731 { 00:26:32.731 "name": "BaseBdev3", 00:26:32.731 "uuid": "8561a18f-b2dc-4df3-bc14-6f3ed3c1341b", 00:26:32.731 "is_configured": true, 00:26:32.731 "data_offset": 2048, 00:26:32.731 "data_size": 63488 00:26:32.731 } 00:26:32.731 ] 00:26:32.731 }' 00:26:32.731 01:56:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:32.731 01:56:32 -- common/autotest_common.sh@10 -- # set +x 00:26:33.321 01:56:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:33.321 01:56:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:33.321 01:56:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:33.321 01:56:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.579 01:56:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:33.579 01:56:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:33.579 01:56:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:33.836 [2024-04-24 01:56:33.699191] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:33.836 01:56:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:33.836 01:56:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:33.836 01:56:33 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.836 01:56:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:34.094 01:56:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:34.094 01:56:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:34.094 01:56:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:34.351 [2024-04-24 01:56:34.354858] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:34.351 [2024-04-24 01:56:34.354925] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:34.608 01:56:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:34.609 01:56:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:34.609 01:56:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:34.609 01:56:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.866 01:56:34 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:34.866 01:56:34 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:34.866 01:56:34 -- bdev/bdev_raid.sh@287 -- # killprocess 123675 00:26:34.866 01:56:34 -- common/autotest_common.sh@936 -- # '[' -z 123675 ']' 00:26:34.866 01:56:34 -- common/autotest_common.sh@940 -- # kill -0 123675 00:26:34.866 01:56:34 -- common/autotest_common.sh@941 -- # uname 00:26:34.866 01:56:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:34.866 01:56:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123675 00:26:34.866 01:56:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:34.866 01:56:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:34.866 killing process with pid 123675 00:26:34.866 01:56:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123675' 00:26:34.866 01:56:34 -- common/autotest_common.sh@955 -- # kill 123675 00:26:34.866 [2024-04-24 01:56:34.809676] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:34.866 01:56:34 -- common/autotest_common.sh@960 -- # wait 123675 00:26:34.866 [2024-04-24 01:56:34.809824] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:36.241 01:56:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:36.241 00:26:36.241 real 0m13.929s 00:26:36.241 user 0m23.615s 00:26:36.241 sys 0m1.953s 00:26:36.241 01:56:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:36.241 ************************************ 00:26:36.241 END TEST raid_state_function_test_sb 00:26:36.241 01:56:36 -- common/autotest_common.sh@10 -- # set +x 00:26:36.241 ************************************ 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:26:36.499 01:56:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:36.499 01:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:36.499 01:56:36 -- common/autotest_common.sh@10 -- # set +x 00:26:36.499 ************************************ 00:26:36.499 START TEST raid_superblock_test 00:26:36.499 ************************************ 00:26:36.499 01:56:36 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 3 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@339 -- # local 
num_base_bdevs=3 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=124083 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:36.499 01:56:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124083 /var/tmp/spdk-raid.sock 00:26:36.499 01:56:36 -- common/autotest_common.sh@817 -- # '[' -z 124083 ']' 00:26:36.499 01:56:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:36.499 01:56:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:36.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:36.499 01:56:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:36.499 01:56:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:36.499 01:56:36 -- common/autotest_common.sh@10 -- # set +x 00:26:36.499 [2024-04-24 01:56:36.462495] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
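The raid_superblock_test that begins here stacks its members differently from the previous tests: each malloc bdev is wrapped in a passthru bdev (pt1-pt3) carrying a fixed UUID, and the array is built from the passthru bdevs with an on-disk superblock. The sequence traced below amounts roughly to the following sketch (one member shown; malloc2/pt2 and malloc3/pt3 follow the same pattern):

  # malloc backing store wrapped by a passthru bdev with a fixed, test-chosen UUID
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001

  # assemble raid0 on the passthru layer; -s writes a superblock to the members
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'pt1 pt2 pt3' -n raid_bdev1 -s

Because the passthru bdevs write straight through to their malloc backing, that superblock is also visible on malloc1-malloc3, which is why the later attempt in this trace to create a second array directly on 'malloc1 malloc2 malloc3' is expected to fail with 'Failed to create RAID bdev raid_bdev1: File exists'.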
00:26:36.499 [2024-04-24 01:56:36.462658] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124083 ] 00:26:36.757 [2024-04-24 01:56:36.627195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.016 [2024-04-24 01:56:36.862205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.274 [2024-04-24 01:56:37.119955] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.532 01:56:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:37.532 01:56:37 -- common/autotest_common.sh@850 -- # return 0 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:37.532 01:56:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:37.532 malloc1 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:37.791 [2024-04-24 01:56:37.819805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:37.791 [2024-04-24 01:56:37.820080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.791 [2024-04-24 01:56:37.820262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:37.791 [2024-04-24 01:56:37.820446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.791 [2024-04-24 01:56:37.823597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.791 [2024-04-24 01:56:37.823890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:37.791 pt1 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:37.791 01:56:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:38.049 malloc2 00:26:38.049 01:56:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:26:38.308 [2024-04-24 01:56:38.306786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:38.308 [2024-04-24 01:56:38.307072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.309 [2024-04-24 01:56:38.307163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:38.309 [2024-04-24 01:56:38.307298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.309 [2024-04-24 01:56:38.310001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.309 [2024-04-24 01:56:38.310196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:38.309 pt2 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:38.309 01:56:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:38.575 malloc3 00:26:38.575 01:56:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:38.833 [2024-04-24 01:56:38.819421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:38.833 [2024-04-24 01:56:38.819751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.833 [2024-04-24 01:56:38.819845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:38.834 [2024-04-24 01:56:38.819994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.834 [2024-04-24 01:56:38.822626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.834 [2024-04-24 01:56:38.822833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:38.834 pt3 00:26:38.834 01:56:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:38.834 01:56:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:38.834 01:56:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:26:39.090 [2024-04-24 01:56:39.035629] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:39.090 [2024-04-24 01:56:39.038017] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.090 [2024-04-24 01:56:39.038230] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:39.090 [2024-04-24 01:56:39.038543] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:26:39.090 [2024-04-24 01:56:39.038646] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:39.090 [2024-04-24 01:56:39.038846] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:39.090 [2024-04-24 01:56:39.039317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:26:39.090 [2024-04-24 01:56:39.039426] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:26:39.090 [2024-04-24 01:56:39.039721] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.090 01:56:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.349 01:56:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:39.349 "name": "raid_bdev1", 00:26:39.349 "uuid": "baccf233-17d4-4742-8fa7-e24440339ca1", 00:26:39.349 "strip_size_kb": 64, 00:26:39.349 "state": "online", 00:26:39.349 "raid_level": "raid0", 00:26:39.349 "superblock": true, 00:26:39.349 "num_base_bdevs": 3, 00:26:39.349 "num_base_bdevs_discovered": 3, 00:26:39.349 "num_base_bdevs_operational": 3, 00:26:39.349 "base_bdevs_list": [ 00:26:39.349 { 00:26:39.349 "name": "pt1", 00:26:39.349 "uuid": "322699c2-d702-5686-a34b-94a4dda7a121", 00:26:39.349 "is_configured": true, 00:26:39.349 "data_offset": 2048, 00:26:39.349 "data_size": 63488 00:26:39.349 }, 00:26:39.349 { 00:26:39.349 "name": "pt2", 00:26:39.349 "uuid": "8d13c631-f9f1-5048-9ec7-2dd94afa91d0", 00:26:39.349 "is_configured": true, 00:26:39.349 "data_offset": 2048, 00:26:39.349 "data_size": 63488 00:26:39.349 }, 00:26:39.349 { 00:26:39.349 "name": "pt3", 00:26:39.349 "uuid": "b155aeb6-16a4-5170-85fc-d8a0e891ede0", 00:26:39.349 "is_configured": true, 00:26:39.349 "data_offset": 2048, 00:26:39.349 "data_size": 63488 00:26:39.349 } 00:26:39.349 ] 00:26:39.350 }' 00:26:39.350 01:56:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:39.350 01:56:39 -- common/autotest_common.sh@10 -- # set +x 00:26:39.917 01:56:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:39.917 01:56:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:40.175 [2024-04-24 01:56:40.064289] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.175 01:56:40 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=baccf233-17d4-4742-8fa7-e24440339ca1 00:26:40.175 01:56:40 -- bdev/bdev_raid.sh@380 -- # '[' -z baccf233-17d4-4742-8fa7-e24440339ca1 ']' 00:26:40.175 01:56:40 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:40.434 [2024-04-24 01:56:40.312035] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:40.434 [2024-04-24 01:56:40.312271] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:40.434 [2024-04-24 01:56:40.312435] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:40.434 [2024-04-24 01:56:40.312605] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:40.434 [2024-04-24 01:56:40.312712] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:26:40.434 01:56:40 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:40.434 01:56:40 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.692 01:56:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:40.692 01:56:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:40.692 01:56:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:40.692 01:56:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:40.951 01:56:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:40.951 01:56:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:40.951 01:56:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:40.951 01:56:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:41.209 01:56:41 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:41.209 01:56:41 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:41.467 01:56:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:41.467 01:56:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:41.467 01:56:41 -- common/autotest_common.sh@638 -- # local es=0 00:26:41.467 01:56:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:41.467 01:56:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.467 01:56:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:41.467 01:56:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.467 01:56:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:41.467 01:56:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.467 01:56:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:41.467 01:56:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.467 01:56:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:41.467 01:56:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:41.726 [2024-04-24 01:56:41.644317] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:41.726 [2024-04-24 01:56:41.646793] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:41.726 [2024-04-24 01:56:41.647038] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:41.726 [2024-04-24 01:56:41.647131] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:41.726 [2024-04-24 01:56:41.647444] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:41.726 [2024-04-24 01:56:41.647623] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:26:41.726 [2024-04-24 01:56:41.647706] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:41.726 [2024-04-24 01:56:41.647750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:26:41.726 request: 00:26:41.726 { 00:26:41.726 "name": "raid_bdev1", 00:26:41.726 "raid_level": "raid0", 00:26:41.726 "base_bdevs": [ 00:26:41.726 "malloc1", 00:26:41.726 "malloc2", 00:26:41.726 "malloc3" 00:26:41.726 ], 00:26:41.726 "superblock": false, 00:26:41.726 "strip_size_kb": 64, 00:26:41.726 "method": "bdev_raid_create", 00:26:41.726 "req_id": 1 00:26:41.726 } 00:26:41.726 Got JSON-RPC error response 00:26:41.726 response: 00:26:41.726 { 00:26:41.726 "code": -17, 00:26:41.726 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:41.726 } 00:26:41.726 01:56:41 -- common/autotest_common.sh@641 -- # es=1 00:26:41.726 01:56:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:41.726 01:56:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:41.726 01:56:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:41.726 01:56:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:41.726 01:56:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.990 01:56:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:41.990 01:56:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:41.990 01:56:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:42.253 [2024-04-24 01:56:42.100470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:42.253 [2024-04-24 01:56:42.100725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.253 [2024-04-24 01:56:42.100883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:42.253 [2024-04-24 01:56:42.100983] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.253 [2024-04-24 01:56:42.103702] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.253 [2024-04-24 01:56:42.103876] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:42.253 [2024-04-24 01:56:42.104090] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:42.253 [2024-04-24 01:56:42.104260] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:42.253 pt1 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.253 01:56:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.511 01:56:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:42.511 "name": "raid_bdev1", 00:26:42.511 "uuid": "baccf233-17d4-4742-8fa7-e24440339ca1", 00:26:42.511 "strip_size_kb": 64, 00:26:42.511 "state": "configuring", 00:26:42.511 "raid_level": "raid0", 00:26:42.511 "superblock": true, 00:26:42.511 "num_base_bdevs": 3, 00:26:42.511 "num_base_bdevs_discovered": 1, 00:26:42.511 "num_base_bdevs_operational": 3, 00:26:42.511 "base_bdevs_list": [ 00:26:42.511 { 00:26:42.511 "name": "pt1", 00:26:42.511 "uuid": "322699c2-d702-5686-a34b-94a4dda7a121", 00:26:42.511 "is_configured": true, 00:26:42.511 "data_offset": 2048, 00:26:42.511 "data_size": 63488 00:26:42.511 }, 00:26:42.511 { 00:26:42.511 "name": null, 00:26:42.511 "uuid": "8d13c631-f9f1-5048-9ec7-2dd94afa91d0", 00:26:42.511 "is_configured": false, 00:26:42.511 "data_offset": 2048, 00:26:42.511 "data_size": 63488 00:26:42.511 }, 00:26:42.511 { 00:26:42.511 "name": null, 00:26:42.511 "uuid": "b155aeb6-16a4-5170-85fc-d8a0e891ede0", 00:26:42.511 "is_configured": false, 00:26:42.511 "data_offset": 2048, 00:26:42.511 "data_size": 63488 00:26:42.511 } 00:26:42.511 ] 00:26:42.511 }' 00:26:42.511 01:56:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:42.511 01:56:42 -- common/autotest_common.sh@10 -- # set +x 00:26:43.077 01:56:42 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:26:43.077 01:56:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:43.077 [2024-04-24 01:56:43.136785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:43.077 [2024-04-24 01:56:43.136906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.077 [2024-04-24 01:56:43.136961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:43.077 [2024-04-24 01:56:43.136984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.077 [2024-04-24 01:56:43.137516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.077 [2024-04-24 01:56:43.137559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:43.077 [2024-04-24 01:56:43.137710] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:43.077 [2024-04-24 01:56:43.137735] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:43.077 pt2 00:26:43.077 01:56:43 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:43.335 [2024-04-24 01:56:43.348917] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.335 01:56:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.592 01:56:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:43.592 "name": "raid_bdev1", 00:26:43.592 "uuid": "baccf233-17d4-4742-8fa7-e24440339ca1", 00:26:43.592 "strip_size_kb": 64, 00:26:43.592 "state": "configuring", 00:26:43.592 "raid_level": "raid0", 00:26:43.592 "superblock": true, 00:26:43.592 "num_base_bdevs": 3, 00:26:43.592 "num_base_bdevs_discovered": 1, 00:26:43.592 "num_base_bdevs_operational": 3, 00:26:43.592 "base_bdevs_list": [ 00:26:43.592 { 00:26:43.592 "name": "pt1", 00:26:43.592 "uuid": "322699c2-d702-5686-a34b-94a4dda7a121", 00:26:43.592 "is_configured": true, 00:26:43.592 "data_offset": 2048, 00:26:43.592 "data_size": 63488 00:26:43.592 }, 00:26:43.592 { 00:26:43.592 "name": null, 00:26:43.592 "uuid": "8d13c631-f9f1-5048-9ec7-2dd94afa91d0", 00:26:43.592 "is_configured": false, 00:26:43.592 "data_offset": 2048, 00:26:43.592 "data_size": 63488 00:26:43.592 }, 00:26:43.592 { 00:26:43.592 "name": null, 00:26:43.592 "uuid": "b155aeb6-16a4-5170-85fc-d8a0e891ede0", 00:26:43.592 "is_configured": false, 00:26:43.592 "data_offset": 2048, 00:26:43.592 "data_size": 63488 00:26:43.592 } 00:26:43.592 ] 00:26:43.592 }' 00:26:43.592 01:56:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:43.592 01:56:43 -- common/autotest_common.sh@10 -- # set +x 00:26:44.157 01:56:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:44.157 01:56:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:44.157 01:56:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:44.415 [2024-04-24 01:56:44.405134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:44.415 [2024-04-24 01:56:44.405248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.415 [2024-04-24 01:56:44.405290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:44.415 [2024-04-24 01:56:44.405321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.415 [2024-04-24 01:56:44.405862] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.415 [2024-04-24 01:56:44.405909] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:44.415 [2024-04-24 01:56:44.406050] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:44.415 [2024-04-24 01:56:44.406073] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:44.415 pt2 00:26:44.415 01:56:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:44.415 01:56:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:44.415 01:56:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:44.673 [2024-04-24 01:56:44.669148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:44.673 [2024-04-24 01:56:44.669236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.673 [2024-04-24 01:56:44.669273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:44.673 [2024-04-24 01:56:44.669302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.673 [2024-04-24 01:56:44.669796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.673 [2024-04-24 01:56:44.669851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:44.673 [2024-04-24 01:56:44.669988] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:44.673 [2024-04-24 01:56:44.670011] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:44.673 [2024-04-24 01:56:44.670131] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:44.673 [2024-04-24 01:56:44.670149] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:44.673 [2024-04-24 01:56:44.670269] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:44.673 [2024-04-24 01:56:44.670589] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:44.673 [2024-04-24 01:56:44.670610] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:26:44.673 [2024-04-24 01:56:44.670760] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.673 pt3 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:44.673 01:56:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.673 
01:56:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.931 01:56:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:44.931 "name": "raid_bdev1", 00:26:44.931 "uuid": "baccf233-17d4-4742-8fa7-e24440339ca1", 00:26:44.931 "strip_size_kb": 64, 00:26:44.931 "state": "online", 00:26:44.931 "raid_level": "raid0", 00:26:44.931 "superblock": true, 00:26:44.931 "num_base_bdevs": 3, 00:26:44.931 "num_base_bdevs_discovered": 3, 00:26:44.931 "num_base_bdevs_operational": 3, 00:26:44.931 "base_bdevs_list": [ 00:26:44.931 { 00:26:44.931 "name": "pt1", 00:26:44.931 "uuid": "322699c2-d702-5686-a34b-94a4dda7a121", 00:26:44.931 "is_configured": true, 00:26:44.931 "data_offset": 2048, 00:26:44.931 "data_size": 63488 00:26:44.931 }, 00:26:44.931 { 00:26:44.931 "name": "pt2", 00:26:44.931 "uuid": "8d13c631-f9f1-5048-9ec7-2dd94afa91d0", 00:26:44.931 "is_configured": true, 00:26:44.931 "data_offset": 2048, 00:26:44.931 "data_size": 63488 00:26:44.931 }, 00:26:44.931 { 00:26:44.931 "name": "pt3", 00:26:44.931 "uuid": "b155aeb6-16a4-5170-85fc-d8a0e891ede0", 00:26:44.931 "is_configured": true, 00:26:44.931 "data_offset": 2048, 00:26:44.931 "data_size": 63488 00:26:44.931 } 00:26:44.931 ] 00:26:44.931 }' 00:26:44.931 01:56:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:44.931 01:56:44 -- common/autotest_common.sh@10 -- # set +x 00:26:45.497 01:56:45 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:45.497 01:56:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:45.756 [2024-04-24 01:56:45.741623] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:45.756 01:56:45 -- bdev/bdev_raid.sh@430 -- # '[' baccf233-17d4-4742-8fa7-e24440339ca1 '!=' baccf233-17d4-4742-8fa7-e24440339ca1 ']' 00:26:45.756 01:56:45 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:26:45.756 01:56:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:45.756 01:56:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:26:45.756 01:56:45 -- bdev/bdev_raid.sh@511 -- # killprocess 124083 00:26:45.756 01:56:45 -- common/autotest_common.sh@936 -- # '[' -z 124083 ']' 00:26:45.756 01:56:45 -- common/autotest_common.sh@940 -- # kill -0 124083 00:26:45.756 01:56:45 -- common/autotest_common.sh@941 -- # uname 00:26:45.756 01:56:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:45.756 01:56:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124083 00:26:45.756 01:56:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:45.756 01:56:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:45.756 killing process with pid 124083 00:26:45.756 01:56:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124083' 00:26:45.756 01:56:45 -- common/autotest_common.sh@955 -- # kill 124083 00:26:45.756 [2024-04-24 01:56:45.793587] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:45.756 [2024-04-24 01:56:45.793664] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.756 [2024-04-24 01:56:45.793722] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:45.756 [2024-04-24 01:56:45.793731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:26:45.756 01:56:45 -- common/autotest_common.sh@960 -- # wait 124083 00:26:46.322 [2024-04-24 01:56:46.129220] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:47.695 01:56:47 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:47.696 00:26:47.696 real 0m11.145s 00:26:47.696 user 0m18.569s 00:26:47.696 sys 0m1.557s 00:26:47.696 01:56:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:47.696 01:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:47.696 ************************************ 00:26:47.696 END TEST raid_superblock_test 00:26:47.696 ************************************ 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:26:47.696 01:56:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:47.696 01:56:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:47.696 01:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:47.696 ************************************ 00:26:47.696 START TEST raid_state_function_test 00:26:47.696 ************************************ 00:26:47.696 01:56:47 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 false 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=124404 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124404' 00:26:47.696 Process raid pid: 124404 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:47.696 01:56:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124404 /var/tmp/spdk-raid.sock 00:26:47.696 01:56:47 
-- common/autotest_common.sh@817 -- # '[' -z 124404 ']' 00:26:47.696 01:56:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:47.696 01:56:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:47.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:47.696 01:56:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:47.696 01:56:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:47.696 01:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:47.696 [2024-04-24 01:56:47.692441] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:26:47.696 [2024-04-24 01:56:47.692573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.955 [2024-04-24 01:56:47.848734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.252 [2024-04-24 01:56:48.113499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.532 [2024-04-24 01:56:48.363774] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:48.796 01:56:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:48.796 01:56:48 -- common/autotest_common.sh@850 -- # return 0 00:26:48.796 01:56:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:49.055 [2024-04-24 01:56:48.928482] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:49.055 [2024-04-24 01:56:48.928625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:49.055 [2024-04-24 01:56:48.928643] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:49.055 [2024-04-24 01:56:48.928671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:49.055 [2024-04-24 01:56:48.928681] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:49.055 [2024-04-24 01:56:48.928745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.055 01:56:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
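For reference, every state check in the traces above follows the same pattern: dump all RAID bdevs over the test's dedicated RPC socket and filter the JSON with jq. Below is a minimal sketch of that pattern; the rpc.py path and the /var/tmp/spdk-raid.sock socket are taken directly from the log, while the specific field comparison is only an illustration of what verify_raid_bdev_state does, not the script verbatim.

    # Query every RAID bdev registered with the target and pick out one by name.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the reported state against the expected one ("configuring" here, since
    # BaseBdev1..3 do not exist yet at this point in the test).
    state=$(jq -r '.state' <<< "$info")
    [[ "$state" == "configuring" ]] || { echo "unexpected state: $state"; exit 1; }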
00:26:49.332 01:56:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:49.332 "name": "Existed_Raid", 00:26:49.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.332 "strip_size_kb": 64, 00:26:49.332 "state": "configuring", 00:26:49.332 "raid_level": "concat", 00:26:49.332 "superblock": false, 00:26:49.332 "num_base_bdevs": 3, 00:26:49.332 "num_base_bdevs_discovered": 0, 00:26:49.332 "num_base_bdevs_operational": 3, 00:26:49.332 "base_bdevs_list": [ 00:26:49.332 { 00:26:49.332 "name": "BaseBdev1", 00:26:49.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.332 "is_configured": false, 00:26:49.332 "data_offset": 0, 00:26:49.332 "data_size": 0 00:26:49.332 }, 00:26:49.332 { 00:26:49.332 "name": "BaseBdev2", 00:26:49.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.332 "is_configured": false, 00:26:49.332 "data_offset": 0, 00:26:49.332 "data_size": 0 00:26:49.332 }, 00:26:49.332 { 00:26:49.332 "name": "BaseBdev3", 00:26:49.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.332 "is_configured": false, 00:26:49.332 "data_offset": 0, 00:26:49.332 "data_size": 0 00:26:49.332 } 00:26:49.332 ] 00:26:49.332 }' 00:26:49.332 01:56:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:49.332 01:56:49 -- common/autotest_common.sh@10 -- # set +x 00:26:49.591 01:56:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:49.851 [2024-04-24 01:56:49.808902] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:49.851 [2024-04-24 01:56:49.809208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:26:49.851 01:56:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:50.110 [2024-04-24 01:56:50.016975] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:50.110 [2024-04-24 01:56:50.017274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:50.110 [2024-04-24 01:56:50.017408] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:50.110 [2024-04-24 01:56:50.017482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:50.110 [2024-04-24 01:56:50.017633] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:50.110 [2024-04-24 01:56:50.017712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:50.110 01:56:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:50.369 [2024-04-24 01:56:50.266987] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:50.369 BaseBdev1 00:26:50.369 01:56:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:50.369 01:56:50 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:50.369 01:56:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:50.369 01:56:50 -- common/autotest_common.sh@887 -- # local i 00:26:50.369 01:56:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:50.369 01:56:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:50.369 01:56:50 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:50.627 01:56:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:50.885 [ 00:26:50.885 { 00:26:50.885 "name": "BaseBdev1", 00:26:50.885 "aliases": [ 00:26:50.885 "e1b7e8ab-e10e-4250-9922-004f993a3407" 00:26:50.885 ], 00:26:50.885 "product_name": "Malloc disk", 00:26:50.885 "block_size": 512, 00:26:50.885 "num_blocks": 65536, 00:26:50.885 "uuid": "e1b7e8ab-e10e-4250-9922-004f993a3407", 00:26:50.885 "assigned_rate_limits": { 00:26:50.885 "rw_ios_per_sec": 0, 00:26:50.885 "rw_mbytes_per_sec": 0, 00:26:50.885 "r_mbytes_per_sec": 0, 00:26:50.885 "w_mbytes_per_sec": 0 00:26:50.885 }, 00:26:50.885 "claimed": true, 00:26:50.885 "claim_type": "exclusive_write", 00:26:50.885 "zoned": false, 00:26:50.885 "supported_io_types": { 00:26:50.885 "read": true, 00:26:50.885 "write": true, 00:26:50.885 "unmap": true, 00:26:50.885 "write_zeroes": true, 00:26:50.885 "flush": true, 00:26:50.885 "reset": true, 00:26:50.885 "compare": false, 00:26:50.885 "compare_and_write": false, 00:26:50.885 "abort": true, 00:26:50.885 "nvme_admin": false, 00:26:50.885 "nvme_io": false 00:26:50.885 }, 00:26:50.885 "memory_domains": [ 00:26:50.885 { 00:26:50.885 "dma_device_id": "system", 00:26:50.885 "dma_device_type": 1 00:26:50.885 }, 00:26:50.885 { 00:26:50.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.885 "dma_device_type": 2 00:26:50.885 } 00:26:50.885 ], 00:26:50.885 "driver_specific": {} 00:26:50.885 } 00:26:50.885 ] 00:26:50.885 01:56:50 -- common/autotest_common.sh@893 -- # return 0 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:50.885 01:56:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:50.886 01:56:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:50.886 01:56:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:50.886 01:56:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:50.886 01:56:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.886 01:56:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.145 01:56:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.145 "name": "Existed_Raid", 00:26:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.145 "strip_size_kb": 64, 00:26:51.145 "state": "configuring", 00:26:51.145 "raid_level": "concat", 00:26:51.145 "superblock": false, 00:26:51.145 "num_base_bdevs": 3, 00:26:51.145 "num_base_bdevs_discovered": 1, 00:26:51.145 "num_base_bdevs_operational": 3, 00:26:51.145 "base_bdevs_list": [ 00:26:51.145 { 00:26:51.145 "name": "BaseBdev1", 00:26:51.145 "uuid": "e1b7e8ab-e10e-4250-9922-004f993a3407", 00:26:51.145 "is_configured": true, 00:26:51.145 "data_offset": 0, 00:26:51.145 "data_size": 65536 00:26:51.145 }, 00:26:51.145 { 00:26:51.145 "name": "BaseBdev2", 00:26:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:51.145 "is_configured": false, 00:26:51.145 "data_offset": 0, 00:26:51.145 "data_size": 0 00:26:51.145 }, 00:26:51.145 { 00:26:51.145 "name": "BaseBdev3", 00:26:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.145 "is_configured": false, 00:26:51.145 "data_offset": 0, 00:26:51.145 "data_size": 0 00:26:51.145 } 00:26:51.145 ] 00:26:51.145 }' 00:26:51.145 01:56:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.145 01:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:51.713 01:56:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:51.713 [2024-04-24 01:56:51.743415] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:51.713 [2024-04-24 01:56:51.743669] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:26:51.713 01:56:51 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:26:51.713 01:56:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:51.971 [2024-04-24 01:56:51.959495] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:51.971 [2024-04-24 01:56:51.961934] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:51.972 [2024-04-24 01:56:51.962134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:51.972 [2024-04-24 01:56:51.962252] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:51.972 [2024-04-24 01:56:51.962373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.972 01:56:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.230 01:56:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:52.230 "name": "Existed_Raid", 00:26:52.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.230 "strip_size_kb": 64, 00:26:52.230 "state": "configuring", 00:26:52.230 "raid_level": "concat", 00:26:52.230 "superblock": false, 00:26:52.230 "num_base_bdevs": 3, 00:26:52.230 "num_base_bdevs_discovered": 1, 00:26:52.230 "num_base_bdevs_operational": 3, 00:26:52.230 "base_bdevs_list": [ 00:26:52.230 { 00:26:52.230 "name": 
"BaseBdev1", 00:26:52.230 "uuid": "e1b7e8ab-e10e-4250-9922-004f993a3407", 00:26:52.230 "is_configured": true, 00:26:52.230 "data_offset": 0, 00:26:52.230 "data_size": 65536 00:26:52.230 }, 00:26:52.230 { 00:26:52.230 "name": "BaseBdev2", 00:26:52.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.230 "is_configured": false, 00:26:52.230 "data_offset": 0, 00:26:52.230 "data_size": 0 00:26:52.230 }, 00:26:52.230 { 00:26:52.230 "name": "BaseBdev3", 00:26:52.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.230 "is_configured": false, 00:26:52.230 "data_offset": 0, 00:26:52.230 "data_size": 0 00:26:52.230 } 00:26:52.230 ] 00:26:52.230 }' 00:26:52.230 01:56:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:52.230 01:56:52 -- common/autotest_common.sh@10 -- # set +x 00:26:52.797 01:56:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:53.052 [2024-04-24 01:56:53.032547] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:53.052 BaseBdev2 00:26:53.052 01:56:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:53.052 01:56:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:53.052 01:56:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:53.052 01:56:53 -- common/autotest_common.sh@887 -- # local i 00:26:53.052 01:56:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:53.052 01:56:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:53.052 01:56:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:53.309 01:56:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:53.566 [ 00:26:53.566 { 00:26:53.566 "name": "BaseBdev2", 00:26:53.566 "aliases": [ 00:26:53.566 "c03f9f23-065d-4a6d-a2d1-5b0ee82e629e" 00:26:53.566 ], 00:26:53.566 "product_name": "Malloc disk", 00:26:53.566 "block_size": 512, 00:26:53.566 "num_blocks": 65536, 00:26:53.566 "uuid": "c03f9f23-065d-4a6d-a2d1-5b0ee82e629e", 00:26:53.566 "assigned_rate_limits": { 00:26:53.566 "rw_ios_per_sec": 0, 00:26:53.566 "rw_mbytes_per_sec": 0, 00:26:53.566 "r_mbytes_per_sec": 0, 00:26:53.566 "w_mbytes_per_sec": 0 00:26:53.566 }, 00:26:53.566 "claimed": true, 00:26:53.566 "claim_type": "exclusive_write", 00:26:53.566 "zoned": false, 00:26:53.566 "supported_io_types": { 00:26:53.566 "read": true, 00:26:53.566 "write": true, 00:26:53.566 "unmap": true, 00:26:53.566 "write_zeroes": true, 00:26:53.566 "flush": true, 00:26:53.566 "reset": true, 00:26:53.566 "compare": false, 00:26:53.566 "compare_and_write": false, 00:26:53.566 "abort": true, 00:26:53.566 "nvme_admin": false, 00:26:53.566 "nvme_io": false 00:26:53.566 }, 00:26:53.566 "memory_domains": [ 00:26:53.566 { 00:26:53.566 "dma_device_id": "system", 00:26:53.566 "dma_device_type": 1 00:26:53.566 }, 00:26:53.566 { 00:26:53.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.566 "dma_device_type": 2 00:26:53.566 } 00:26:53.566 ], 00:26:53.566 "driver_specific": {} 00:26:53.566 } 00:26:53.566 ] 00:26:53.566 01:56:53 -- common/autotest_common.sh@893 -- # return 0 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.566 01:56:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.823 01:56:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:53.823 "name": "Existed_Raid", 00:26:53.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.823 "strip_size_kb": 64, 00:26:53.823 "state": "configuring", 00:26:53.823 "raid_level": "concat", 00:26:53.823 "superblock": false, 00:26:53.823 "num_base_bdevs": 3, 00:26:53.823 "num_base_bdevs_discovered": 2, 00:26:53.823 "num_base_bdevs_operational": 3, 00:26:53.823 "base_bdevs_list": [ 00:26:53.823 { 00:26:53.823 "name": "BaseBdev1", 00:26:53.823 "uuid": "e1b7e8ab-e10e-4250-9922-004f993a3407", 00:26:53.823 "is_configured": true, 00:26:53.823 "data_offset": 0, 00:26:53.823 "data_size": 65536 00:26:53.823 }, 00:26:53.823 { 00:26:53.823 "name": "BaseBdev2", 00:26:53.823 "uuid": "c03f9f23-065d-4a6d-a2d1-5b0ee82e629e", 00:26:53.823 "is_configured": true, 00:26:53.823 "data_offset": 0, 00:26:53.823 "data_size": 65536 00:26:53.823 }, 00:26:53.823 { 00:26:53.823 "name": "BaseBdev3", 00:26:53.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.823 "is_configured": false, 00:26:53.823 "data_offset": 0, 00:26:53.823 "data_size": 0 00:26:53.823 } 00:26:53.824 ] 00:26:53.824 }' 00:26:53.824 01:56:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:53.824 01:56:53 -- common/autotest_common.sh@10 -- # set +x 00:26:54.389 01:56:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:54.646 [2024-04-24 01:56:54.613169] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:54.646 [2024-04-24 01:56:54.613224] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:54.646 [2024-04-24 01:56:54.613233] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:54.646 [2024-04-24 01:56:54.613405] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:26:54.646 [2024-04-24 01:56:54.613765] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:54.646 [2024-04-24 01:56:54.613777] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:26:54.646 [2024-04-24 01:56:54.614052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.646 BaseBdev3 00:26:54.646 01:56:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:54.646 01:56:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:54.646 01:56:54 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:54.646 01:56:54 -- common/autotest_common.sh@887 -- # local i 00:26:54.646 01:56:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:54.647 01:56:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:54.647 01:56:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:54.904 01:56:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:55.162 [ 00:26:55.162 { 00:26:55.162 "name": "BaseBdev3", 00:26:55.162 "aliases": [ 00:26:55.162 "51af913a-35df-4c7c-a473-d12f7cc15d45" 00:26:55.162 ], 00:26:55.162 "product_name": "Malloc disk", 00:26:55.162 "block_size": 512, 00:26:55.162 "num_blocks": 65536, 00:26:55.162 "uuid": "51af913a-35df-4c7c-a473-d12f7cc15d45", 00:26:55.162 "assigned_rate_limits": { 00:26:55.162 "rw_ios_per_sec": 0, 00:26:55.162 "rw_mbytes_per_sec": 0, 00:26:55.162 "r_mbytes_per_sec": 0, 00:26:55.162 "w_mbytes_per_sec": 0 00:26:55.162 }, 00:26:55.162 "claimed": true, 00:26:55.162 "claim_type": "exclusive_write", 00:26:55.162 "zoned": false, 00:26:55.162 "supported_io_types": { 00:26:55.162 "read": true, 00:26:55.162 "write": true, 00:26:55.162 "unmap": true, 00:26:55.162 "write_zeroes": true, 00:26:55.162 "flush": true, 00:26:55.162 "reset": true, 00:26:55.162 "compare": false, 00:26:55.162 "compare_and_write": false, 00:26:55.162 "abort": true, 00:26:55.162 "nvme_admin": false, 00:26:55.162 "nvme_io": false 00:26:55.162 }, 00:26:55.162 "memory_domains": [ 00:26:55.162 { 00:26:55.162 "dma_device_id": "system", 00:26:55.162 "dma_device_type": 1 00:26:55.162 }, 00:26:55.162 { 00:26:55.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.162 "dma_device_type": 2 00:26:55.162 } 00:26:55.162 ], 00:26:55.162 "driver_specific": {} 00:26:55.162 } 00:26:55.162 ] 00:26:55.162 01:56:55 -- common/autotest_common.sh@893 -- # return 0 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.162 01:56:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.432 01:56:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:55.432 "name": "Existed_Raid", 00:26:55.432 "uuid": "184eec08-3d64-4b7f-a221-037129ef8ec3", 00:26:55.432 "strip_size_kb": 64, 00:26:55.432 "state": "online", 00:26:55.432 "raid_level": "concat", 00:26:55.432 "superblock": false, 00:26:55.432 "num_base_bdevs": 3, 
00:26:55.432 "num_base_bdevs_discovered": 3, 00:26:55.432 "num_base_bdevs_operational": 3, 00:26:55.432 "base_bdevs_list": [ 00:26:55.432 { 00:26:55.432 "name": "BaseBdev1", 00:26:55.432 "uuid": "e1b7e8ab-e10e-4250-9922-004f993a3407", 00:26:55.432 "is_configured": true, 00:26:55.432 "data_offset": 0, 00:26:55.432 "data_size": 65536 00:26:55.432 }, 00:26:55.432 { 00:26:55.432 "name": "BaseBdev2", 00:26:55.432 "uuid": "c03f9f23-065d-4a6d-a2d1-5b0ee82e629e", 00:26:55.432 "is_configured": true, 00:26:55.432 "data_offset": 0, 00:26:55.432 "data_size": 65536 00:26:55.432 }, 00:26:55.432 { 00:26:55.432 "name": "BaseBdev3", 00:26:55.432 "uuid": "51af913a-35df-4c7c-a473-d12f7cc15d45", 00:26:55.432 "is_configured": true, 00:26:55.432 "data_offset": 0, 00:26:55.432 "data_size": 65536 00:26:55.432 } 00:26:55.433 ] 00:26:55.433 }' 00:26:55.433 01:56:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:55.433 01:56:55 -- common/autotest_common.sh@10 -- # set +x 00:26:56.089 01:56:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:56.089 [2024-04-24 01:56:56.089646] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:56.089 [2024-04-24 01:56:56.089697] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.089 [2024-04-24 01:56:56.089754] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.347 01:56:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.606 01:56:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:56.606 "name": "Existed_Raid", 00:26:56.606 "uuid": "184eec08-3d64-4b7f-a221-037129ef8ec3", 00:26:56.606 "strip_size_kb": 64, 00:26:56.606 "state": "offline", 00:26:56.606 "raid_level": "concat", 00:26:56.606 "superblock": false, 00:26:56.606 "num_base_bdevs": 3, 00:26:56.606 "num_base_bdevs_discovered": 2, 00:26:56.606 "num_base_bdevs_operational": 2, 00:26:56.606 "base_bdevs_list": [ 00:26:56.606 { 00:26:56.606 "name": null, 00:26:56.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.606 "is_configured": false, 00:26:56.606 "data_offset": 0, 00:26:56.606 "data_size": 65536 00:26:56.606 }, 
00:26:56.606 { 00:26:56.606 "name": "BaseBdev2", 00:26:56.606 "uuid": "c03f9f23-065d-4a6d-a2d1-5b0ee82e629e", 00:26:56.606 "is_configured": true, 00:26:56.606 "data_offset": 0, 00:26:56.606 "data_size": 65536 00:26:56.606 }, 00:26:56.606 { 00:26:56.606 "name": "BaseBdev3", 00:26:56.606 "uuid": "51af913a-35df-4c7c-a473-d12f7cc15d45", 00:26:56.606 "is_configured": true, 00:26:56.606 "data_offset": 0, 00:26:56.606 "data_size": 65536 00:26:56.606 } 00:26:56.606 ] 00:26:56.606 }' 00:26:56.606 01:56:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:56.606 01:56:56 -- common/autotest_common.sh@10 -- # set +x 00:26:57.173 01:56:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:57.173 01:56:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:57.173 01:56:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.173 01:56:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:57.431 01:56:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:57.431 01:56:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:57.431 01:56:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:57.689 [2024-04-24 01:56:57.592922] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:57.689 01:56:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:57.689 01:56:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:57.689 01:56:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.689 01:56:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:57.948 01:56:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:57.948 01:56:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:57.948 01:56:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:58.207 [2024-04-24 01:56:58.236535] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:58.207 [2024-04-24 01:56:58.236607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:58.465 01:56:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:58.465 01:56:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:58.465 01:56:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.465 01:56:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:58.723 01:56:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:58.723 01:56:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:58.723 01:56:58 -- bdev/bdev_raid.sh@287 -- # killprocess 124404 00:26:58.723 01:56:58 -- common/autotest_common.sh@936 -- # '[' -z 124404 ']' 00:26:58.723 01:56:58 -- common/autotest_common.sh@940 -- # kill -0 124404 00:26:58.723 01:56:58 -- common/autotest_common.sh@941 -- # uname 00:26:58.723 01:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:58.723 01:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124404 00:26:58.723 01:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:58.723 01:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:58.723 01:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 124404' 00:26:58.723 killing process with pid 124404 00:26:58.723 01:56:58 -- common/autotest_common.sh@955 -- # kill 124404 00:26:58.723 [2024-04-24 01:56:58.616133] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:58.723 [2024-04-24 01:56:58.616261] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.723 01:56:58 -- common/autotest_common.sh@960 -- # wait 124404 00:27:00.099 01:57:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:00.099 00:27:00.099 real 0m12.449s 00:27:00.099 user 0m20.836s 00:27:00.099 sys 0m1.927s 00:27:00.099 ************************************ 00:27:00.099 END TEST raid_state_function_test 00:27:00.099 ************************************ 00:27:00.099 01:57:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:00.099 01:57:00 -- common/autotest_common.sh@10 -- # set +x 00:27:00.099 01:57:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:27:00.099 01:57:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:00.099 01:57:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.099 01:57:00 -- common/autotest_common.sh@10 -- # set +x 00:27:00.099 ************************************ 00:27:00.099 START TEST raid_state_function_test_sb 00:27:00.099 ************************************ 00:27:00.100 01:57:00 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 true 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=124788 00:27:00.100 Process raid pid: 124788 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid 
pid: 124788' 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124788 /var/tmp/spdk-raid.sock 00:27:00.100 01:57:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:00.100 01:57:00 -- common/autotest_common.sh@817 -- # '[' -z 124788 ']' 00:27:00.100 01:57:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:00.100 01:57:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:00.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:00.100 01:57:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:00.100 01:57:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:00.100 01:57:00 -- common/autotest_common.sh@10 -- # set +x 00:27:00.359 [2024-04-24 01:57:00.257309] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:27:00.359 [2024-04-24 01:57:00.257495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.359 [2024-04-24 01:57:00.442317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.925 [2024-04-24 01:57:00.725876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.925 [2024-04-24 01:57:00.965639] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:01.184 01:57:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:01.184 01:57:01 -- common/autotest_common.sh@850 -- # return 0 00:27:01.184 01:57:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:01.523 [2024-04-24 01:57:01.439887] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:01.523 [2024-04-24 01:57:01.440285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:01.523 [2024-04-24 01:57:01.440437] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:01.523 [2024-04-24 01:57:01.440523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:01.523 [2024-04-24 01:57:01.440672] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:01.523 [2024-04-24 01:57:01.440789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:01.523 01:57:01 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.523 01:57:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.806 01:57:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:01.806 "name": "Existed_Raid", 00:27:01.806 "uuid": "3a9fbe9c-5fe3-495a-a726-c4db558d7ed8", 00:27:01.806 "strip_size_kb": 64, 00:27:01.806 "state": "configuring", 00:27:01.806 "raid_level": "concat", 00:27:01.806 "superblock": true, 00:27:01.806 "num_base_bdevs": 3, 00:27:01.806 "num_base_bdevs_discovered": 0, 00:27:01.806 "num_base_bdevs_operational": 3, 00:27:01.806 "base_bdevs_list": [ 00:27:01.806 { 00:27:01.806 "name": "BaseBdev1", 00:27:01.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.806 "is_configured": false, 00:27:01.806 "data_offset": 0, 00:27:01.806 "data_size": 0 00:27:01.806 }, 00:27:01.806 { 00:27:01.806 "name": "BaseBdev2", 00:27:01.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.806 "is_configured": false, 00:27:01.806 "data_offset": 0, 00:27:01.806 "data_size": 0 00:27:01.806 }, 00:27:01.806 { 00:27:01.806 "name": "BaseBdev3", 00:27:01.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.806 "is_configured": false, 00:27:01.806 "data_offset": 0, 00:27:01.806 "data_size": 0 00:27:01.806 } 00:27:01.806 ] 00:27:01.806 }' 00:27:01.806 01:57:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:01.806 01:57:01 -- common/autotest_common.sh@10 -- # set +x 00:27:02.375 01:57:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:02.633 [2024-04-24 01:57:02.503883] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:02.633 [2024-04-24 01:57:02.504099] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:27:02.633 01:57:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:02.894 [2024-04-24 01:57:02.747990] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:02.894 [2024-04-24 01:57:02.748295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:02.894 [2024-04-24 01:57:02.748479] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:02.894 [2024-04-24 01:57:02.748561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:02.894 [2024-04-24 01:57:02.748831] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:02.894 [2024-04-24 01:57:02.748927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:02.894 01:57:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:03.150 [2024-04-24 01:57:03.016587] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:03.150 BaseBdev1 00:27:03.150 01:57:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:03.150 01:57:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:03.150 01:57:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 
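What the waitforbdev trace around this point amounts to, condensed from the xtrace (the socket path, the 2000 ms default and both RPC calls are taken from the surrounding lines; only the $rpc/$sock shorthand is added here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  bdev_timeout=2000                                    # default used when no timeout argument is given
  $rpc -s $sock bdev_wait_for_examine                  # let pending examine callbacks finish first
  $rpc -s $sock bdev_get_bdevs -b BaseBdev1 -t 2000    # then wait up to 2000 ms for BaseBdev1 to appear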
00:27:03.150 01:57:03 -- common/autotest_common.sh@887 -- # local i 00:27:03.150 01:57:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:03.150 01:57:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:03.150 01:57:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:03.408 01:57:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:03.408 [ 00:27:03.408 { 00:27:03.408 "name": "BaseBdev1", 00:27:03.408 "aliases": [ 00:27:03.409 "c47b9f60-4f93-4837-a744-db496b8b31bf" 00:27:03.409 ], 00:27:03.409 "product_name": "Malloc disk", 00:27:03.409 "block_size": 512, 00:27:03.409 "num_blocks": 65536, 00:27:03.409 "uuid": "c47b9f60-4f93-4837-a744-db496b8b31bf", 00:27:03.409 "assigned_rate_limits": { 00:27:03.409 "rw_ios_per_sec": 0, 00:27:03.409 "rw_mbytes_per_sec": 0, 00:27:03.409 "r_mbytes_per_sec": 0, 00:27:03.409 "w_mbytes_per_sec": 0 00:27:03.409 }, 00:27:03.409 "claimed": true, 00:27:03.409 "claim_type": "exclusive_write", 00:27:03.409 "zoned": false, 00:27:03.409 "supported_io_types": { 00:27:03.409 "read": true, 00:27:03.409 "write": true, 00:27:03.409 "unmap": true, 00:27:03.409 "write_zeroes": true, 00:27:03.409 "flush": true, 00:27:03.409 "reset": true, 00:27:03.409 "compare": false, 00:27:03.409 "compare_and_write": false, 00:27:03.409 "abort": true, 00:27:03.409 "nvme_admin": false, 00:27:03.409 "nvme_io": false 00:27:03.409 }, 00:27:03.409 "memory_domains": [ 00:27:03.409 { 00:27:03.409 "dma_device_id": "system", 00:27:03.409 "dma_device_type": 1 00:27:03.409 }, 00:27:03.409 { 00:27:03.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.409 "dma_device_type": 2 00:27:03.409 } 00:27:03.409 ], 00:27:03.409 "driver_specific": {} 00:27:03.409 } 00:27:03.409 ] 00:27:03.666 01:57:03 -- common/autotest_common.sh@893 -- # return 0 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.666 01:57:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.925 01:57:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:03.925 "name": "Existed_Raid", 00:27:03.925 "uuid": "901ed0d8-4c49-4192-9ad6-1e534d9a14f6", 00:27:03.925 "strip_size_kb": 64, 00:27:03.925 "state": "configuring", 00:27:03.925 "raid_level": "concat", 00:27:03.925 "superblock": true, 00:27:03.925 "num_base_bdevs": 3, 00:27:03.925 "num_base_bdevs_discovered": 1, 00:27:03.925 "num_base_bdevs_operational": 3, 00:27:03.925 "base_bdevs_list": [ 00:27:03.925 { 00:27:03.925 "name": "BaseBdev1", 00:27:03.925 
"uuid": "c47b9f60-4f93-4837-a744-db496b8b31bf", 00:27:03.925 "is_configured": true, 00:27:03.925 "data_offset": 2048, 00:27:03.925 "data_size": 63488 00:27:03.925 }, 00:27:03.925 { 00:27:03.925 "name": "BaseBdev2", 00:27:03.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.925 "is_configured": false, 00:27:03.925 "data_offset": 0, 00:27:03.925 "data_size": 0 00:27:03.925 }, 00:27:03.925 { 00:27:03.925 "name": "BaseBdev3", 00:27:03.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.925 "is_configured": false, 00:27:03.925 "data_offset": 0, 00:27:03.925 "data_size": 0 00:27:03.925 } 00:27:03.925 ] 00:27:03.925 }' 00:27:03.925 01:57:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:03.925 01:57:03 -- common/autotest_common.sh@10 -- # set +x 00:27:04.489 01:57:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:04.747 [2024-04-24 01:57:04.705112] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:04.747 [2024-04-24 01:57:04.705352] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:27:04.747 01:57:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:27:04.747 01:57:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:05.312 01:57:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:05.571 BaseBdev1 00:27:05.571 01:57:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:27:05.571 01:57:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:05.571 01:57:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:05.571 01:57:05 -- common/autotest_common.sh@887 -- # local i 00:27:05.571 01:57:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:05.571 01:57:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:05.571 01:57:05 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:05.828 01:57:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:05.828 [ 00:27:05.828 { 00:27:05.828 "name": "BaseBdev1", 00:27:05.828 "aliases": [ 00:27:05.828 "4f2fd833-e49f-47aa-9df0-f8ce612e7bc2" 00:27:05.828 ], 00:27:05.828 "product_name": "Malloc disk", 00:27:05.828 "block_size": 512, 00:27:05.828 "num_blocks": 65536, 00:27:05.828 "uuid": "4f2fd833-e49f-47aa-9df0-f8ce612e7bc2", 00:27:05.828 "assigned_rate_limits": { 00:27:05.828 "rw_ios_per_sec": 0, 00:27:05.828 "rw_mbytes_per_sec": 0, 00:27:05.828 "r_mbytes_per_sec": 0, 00:27:05.828 "w_mbytes_per_sec": 0 00:27:05.828 }, 00:27:05.828 "claimed": false, 00:27:05.828 "zoned": false, 00:27:05.828 "supported_io_types": { 00:27:05.828 "read": true, 00:27:05.828 "write": true, 00:27:05.828 "unmap": true, 00:27:05.828 "write_zeroes": true, 00:27:05.828 "flush": true, 00:27:05.828 "reset": true, 00:27:05.828 "compare": false, 00:27:05.828 "compare_and_write": false, 00:27:05.828 "abort": true, 00:27:05.828 "nvme_admin": false, 00:27:05.828 "nvme_io": false 00:27:05.828 }, 00:27:05.828 "memory_domains": [ 00:27:05.828 { 00:27:05.828 "dma_device_id": "system", 00:27:05.828 "dma_device_type": 1 00:27:05.828 }, 00:27:05.828 { 00:27:05.829 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:05.829 "dma_device_type": 2 00:27:05.829 } 00:27:05.829 ], 00:27:05.829 "driver_specific": {} 00:27:05.829 } 00:27:05.829 ] 00:27:05.829 01:57:05 -- common/autotest_common.sh@893 -- # return 0 00:27:05.829 01:57:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:06.086 [2024-04-24 01:57:06.073758] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:06.086 [2024-04-24 01:57:06.075869] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:06.086 [2024-04-24 01:57:06.075937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:06.086 [2024-04-24 01:57:06.075947] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:06.087 [2024-04-24 01:57:06.075971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.087 01:57:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.346 01:57:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:06.346 "name": "Existed_Raid", 00:27:06.346 "uuid": "23255a9a-2dba-4a82-bb66-fe7372e1be22", 00:27:06.346 "strip_size_kb": 64, 00:27:06.346 "state": "configuring", 00:27:06.346 "raid_level": "concat", 00:27:06.346 "superblock": true, 00:27:06.346 "num_base_bdevs": 3, 00:27:06.346 "num_base_bdevs_discovered": 1, 00:27:06.346 "num_base_bdevs_operational": 3, 00:27:06.346 "base_bdevs_list": [ 00:27:06.346 { 00:27:06.346 "name": "BaseBdev1", 00:27:06.346 "uuid": "4f2fd833-e49f-47aa-9df0-f8ce612e7bc2", 00:27:06.346 "is_configured": true, 00:27:06.346 "data_offset": 2048, 00:27:06.346 "data_size": 63488 00:27:06.346 }, 00:27:06.346 { 00:27:06.346 "name": "BaseBdev2", 00:27:06.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.347 "is_configured": false, 00:27:06.347 "data_offset": 0, 00:27:06.347 "data_size": 0 00:27:06.347 }, 00:27:06.347 { 00:27:06.347 "name": "BaseBdev3", 00:27:06.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.347 "is_configured": false, 00:27:06.347 "data_offset": 0, 00:27:06.347 "data_size": 0 00:27:06.347 } 00:27:06.347 ] 00:27:06.347 }' 00:27:06.347 01:57:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:06.347 01:57:06 -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.912 01:57:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:07.170 [2024-04-24 01:57:07.064647] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:07.170 BaseBdev2 00:27:07.170 01:57:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:07.170 01:57:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:27:07.170 01:57:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:07.170 01:57:07 -- common/autotest_common.sh@887 -- # local i 00:27:07.170 01:57:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:07.170 01:57:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:07.170 01:57:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:07.426 01:57:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:07.718 [ 00:27:07.718 { 00:27:07.718 "name": "BaseBdev2", 00:27:07.718 "aliases": [ 00:27:07.718 "adaa313a-1552-4c5f-88f9-364497acfc24" 00:27:07.718 ], 00:27:07.718 "product_name": "Malloc disk", 00:27:07.718 "block_size": 512, 00:27:07.718 "num_blocks": 65536, 00:27:07.718 "uuid": "adaa313a-1552-4c5f-88f9-364497acfc24", 00:27:07.718 "assigned_rate_limits": { 00:27:07.718 "rw_ios_per_sec": 0, 00:27:07.718 "rw_mbytes_per_sec": 0, 00:27:07.718 "r_mbytes_per_sec": 0, 00:27:07.718 "w_mbytes_per_sec": 0 00:27:07.718 }, 00:27:07.718 "claimed": true, 00:27:07.718 "claim_type": "exclusive_write", 00:27:07.718 "zoned": false, 00:27:07.718 "supported_io_types": { 00:27:07.718 "read": true, 00:27:07.718 "write": true, 00:27:07.718 "unmap": true, 00:27:07.718 "write_zeroes": true, 00:27:07.718 "flush": true, 00:27:07.718 "reset": true, 00:27:07.718 "compare": false, 00:27:07.718 "compare_and_write": false, 00:27:07.718 "abort": true, 00:27:07.718 "nvme_admin": false, 00:27:07.718 "nvme_io": false 00:27:07.718 }, 00:27:07.718 "memory_domains": [ 00:27:07.718 { 00:27:07.718 "dma_device_id": "system", 00:27:07.718 "dma_device_type": 1 00:27:07.718 }, 00:27:07.718 { 00:27:07.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.718 "dma_device_type": 2 00:27:07.718 } 00:27:07.718 ], 00:27:07.718 "driver_specific": {} 00:27:07.718 } 00:27:07.718 ] 00:27:07.718 01:57:07 -- common/autotest_common.sh@893 -- # return 0 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:07.718 "name": "Existed_Raid", 00:27:07.718 "uuid": "23255a9a-2dba-4a82-bb66-fe7372e1be22", 00:27:07.718 "strip_size_kb": 64, 00:27:07.718 "state": "configuring", 00:27:07.718 "raid_level": "concat", 00:27:07.718 "superblock": true, 00:27:07.718 "num_base_bdevs": 3, 00:27:07.718 "num_base_bdevs_discovered": 2, 00:27:07.718 "num_base_bdevs_operational": 3, 00:27:07.718 "base_bdevs_list": [ 00:27:07.718 { 00:27:07.718 "name": "BaseBdev1", 00:27:07.718 "uuid": "4f2fd833-e49f-47aa-9df0-f8ce612e7bc2", 00:27:07.718 "is_configured": true, 00:27:07.718 "data_offset": 2048, 00:27:07.718 "data_size": 63488 00:27:07.718 }, 00:27:07.718 { 00:27:07.718 "name": "BaseBdev2", 00:27:07.718 "uuid": "adaa313a-1552-4c5f-88f9-364497acfc24", 00:27:07.718 "is_configured": true, 00:27:07.718 "data_offset": 2048, 00:27:07.718 "data_size": 63488 00:27:07.718 }, 00:27:07.718 { 00:27:07.718 "name": "BaseBdev3", 00:27:07.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.718 "is_configured": false, 00:27:07.718 "data_offset": 0, 00:27:07.718 "data_size": 0 00:27:07.718 } 00:27:07.718 ] 00:27:07.718 }' 00:27:07.718 01:57:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:07.718 01:57:07 -- common/autotest_common.sh@10 -- # set +x 00:27:08.283 01:57:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:08.539 [2024-04-24 01:57:08.623064] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:08.539 [2024-04-24 01:57:08.623291] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:08.539 [2024-04-24 01:57:08.623304] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:08.540 [2024-04-24 01:57:08.623463] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:27:08.540 [2024-04-24 01:57:08.623817] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:08.540 [2024-04-24 01:57:08.623843] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:27:08.540 [2024-04-24 01:57:08.623977] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.540 BaseBdev3 00:27:08.798 01:57:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:08.798 01:57:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:27:08.798 01:57:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:08.798 01:57:08 -- common/autotest_common.sh@887 -- # local i 00:27:08.798 01:57:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:08.798 01:57:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:08.798 01:57:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:09.056 01:57:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:09.056 [ 00:27:09.056 { 00:27:09.056 "name": "BaseBdev3", 00:27:09.056 "aliases": [ 00:27:09.056 "9f84aa7e-d330-4966-9dc4-b9bcdcb5e6e6" 00:27:09.056 ], 00:27:09.056 
"product_name": "Malloc disk", 00:27:09.056 "block_size": 512, 00:27:09.056 "num_blocks": 65536, 00:27:09.056 "uuid": "9f84aa7e-d330-4966-9dc4-b9bcdcb5e6e6", 00:27:09.056 "assigned_rate_limits": { 00:27:09.056 "rw_ios_per_sec": 0, 00:27:09.056 "rw_mbytes_per_sec": 0, 00:27:09.056 "r_mbytes_per_sec": 0, 00:27:09.056 "w_mbytes_per_sec": 0 00:27:09.056 }, 00:27:09.056 "claimed": true, 00:27:09.056 "claim_type": "exclusive_write", 00:27:09.056 "zoned": false, 00:27:09.056 "supported_io_types": { 00:27:09.056 "read": true, 00:27:09.056 "write": true, 00:27:09.056 "unmap": true, 00:27:09.056 "write_zeroes": true, 00:27:09.056 "flush": true, 00:27:09.056 "reset": true, 00:27:09.056 "compare": false, 00:27:09.056 "compare_and_write": false, 00:27:09.056 "abort": true, 00:27:09.056 "nvme_admin": false, 00:27:09.056 "nvme_io": false 00:27:09.056 }, 00:27:09.056 "memory_domains": [ 00:27:09.056 { 00:27:09.056 "dma_device_id": "system", 00:27:09.056 "dma_device_type": 1 00:27:09.056 }, 00:27:09.056 { 00:27:09.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.056 "dma_device_type": 2 00:27:09.056 } 00:27:09.056 ], 00:27:09.056 "driver_specific": {} 00:27:09.056 } 00:27:09.056 ] 00:27:09.315 01:57:09 -- common/autotest_common.sh@893 -- # return 0 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:09.315 01:57:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:09.315 "name": "Existed_Raid", 00:27:09.315 "uuid": "23255a9a-2dba-4a82-bb66-fe7372e1be22", 00:27:09.315 "strip_size_kb": 64, 00:27:09.315 "state": "online", 00:27:09.315 "raid_level": "concat", 00:27:09.315 "superblock": true, 00:27:09.315 "num_base_bdevs": 3, 00:27:09.315 "num_base_bdevs_discovered": 3, 00:27:09.315 "num_base_bdevs_operational": 3, 00:27:09.315 "base_bdevs_list": [ 00:27:09.315 { 00:27:09.315 "name": "BaseBdev1", 00:27:09.315 "uuid": "4f2fd833-e49f-47aa-9df0-f8ce612e7bc2", 00:27:09.315 "is_configured": true, 00:27:09.315 "data_offset": 2048, 00:27:09.315 "data_size": 63488 00:27:09.315 }, 00:27:09.315 { 00:27:09.315 "name": "BaseBdev2", 00:27:09.315 "uuid": "adaa313a-1552-4c5f-88f9-364497acfc24", 00:27:09.315 "is_configured": true, 00:27:09.315 "data_offset": 2048, 00:27:09.315 "data_size": 63488 00:27:09.315 }, 00:27:09.315 { 00:27:09.315 "name": "BaseBdev3", 00:27:09.315 "uuid": "9f84aa7e-d330-4966-9dc4-b9bcdcb5e6e6", 00:27:09.315 "is_configured": true, 00:27:09.315 "data_offset": 2048, 00:27:09.315 
"data_size": 63488 00:27:09.315 } 00:27:09.315 ] 00:27:09.315 }' 00:27:09.573 01:57:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:09.573 01:57:09 -- common/autotest_common.sh@10 -- # set +x 00:27:10.137 01:57:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:10.137 [2024-04-24 01:57:10.191543] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:10.137 [2024-04-24 01:57:10.191590] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:10.137 [2024-04-24 01:57:10.191644] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:10.395 01:57:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.652 01:57:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:10.652 "name": "Existed_Raid", 00:27:10.652 "uuid": "23255a9a-2dba-4a82-bb66-fe7372e1be22", 00:27:10.652 "strip_size_kb": 64, 00:27:10.652 "state": "offline", 00:27:10.652 "raid_level": "concat", 00:27:10.652 "superblock": true, 00:27:10.652 "num_base_bdevs": 3, 00:27:10.652 "num_base_bdevs_discovered": 2, 00:27:10.652 "num_base_bdevs_operational": 2, 00:27:10.652 "base_bdevs_list": [ 00:27:10.652 { 00:27:10.652 "name": null, 00:27:10.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.652 "is_configured": false, 00:27:10.652 "data_offset": 2048, 00:27:10.652 "data_size": 63488 00:27:10.652 }, 00:27:10.652 { 00:27:10.652 "name": "BaseBdev2", 00:27:10.652 "uuid": "adaa313a-1552-4c5f-88f9-364497acfc24", 00:27:10.652 "is_configured": true, 00:27:10.652 "data_offset": 2048, 00:27:10.652 "data_size": 63488 00:27:10.652 }, 00:27:10.652 { 00:27:10.652 "name": "BaseBdev3", 00:27:10.652 "uuid": "9f84aa7e-d330-4966-9dc4-b9bcdcb5e6e6", 00:27:10.652 "is_configured": true, 00:27:10.652 "data_offset": 2048, 00:27:10.652 "data_size": 63488 00:27:10.652 } 00:27:10.652 ] 00:27:10.652 }' 00:27:10.652 01:57:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:10.652 01:57:10 -- common/autotest_common.sh@10 -- # set +x 00:27:11.219 01:57:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:11.219 01:57:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 
00:27:11.219 01:57:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.219 01:57:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:11.477 01:57:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:11.477 01:57:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:11.477 01:57:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:11.734 [2024-04-24 01:57:11.576449] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:11.734 01:57:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:11.734 01:57:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:11.734 01:57:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.734 01:57:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:11.992 01:57:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:11.992 01:57:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:11.992 01:57:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:12.252 [2024-04-24 01:57:12.155939] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:12.252 [2024-04-24 01:57:12.156011] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:27:12.252 01:57:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:12.252 01:57:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:12.252 01:57:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.252 01:57:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:12.511 01:57:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:12.511 01:57:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:12.511 01:57:12 -- bdev/bdev_raid.sh@287 -- # killprocess 124788 00:27:12.511 01:57:12 -- common/autotest_common.sh@936 -- # '[' -z 124788 ']' 00:27:12.511 01:57:12 -- common/autotest_common.sh@940 -- # kill -0 124788 00:27:12.511 01:57:12 -- common/autotest_common.sh@941 -- # uname 00:27:12.511 01:57:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:12.511 01:57:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124788 00:27:12.511 01:57:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:12.511 01:57:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:12.511 killing process with pid 124788 00:27:12.512 01:57:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124788' 00:27:12.512 01:57:12 -- common/autotest_common.sh@955 -- # kill 124788 00:27:12.512 [2024-04-24 01:57:12.539241] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:12.512 [2024-04-24 01:57:12.539391] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:12.512 01:57:12 -- common/autotest_common.sh@960 -- # wait 124788 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:14.413 00:27:14.413 real 0m13.882s 00:27:14.413 user 0m23.665s 00:27:14.413 sys 0m1.798s 00:27:14.413 01:57:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:14.413 01:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:14.413 
************************************ 00:27:14.413 END TEST raid_state_function_test_sb 00:27:14.413 ************************************ 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:27:14.413 01:57:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:27:14.413 01:57:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:14.413 01:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:14.413 ************************************ 00:27:14.413 START TEST raid_superblock_test 00:27:14.413 ************************************ 00:27:14.413 01:57:14 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 3 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=125197 00:27:14.413 01:57:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125197 /var/tmp/spdk-raid.sock 00:27:14.413 01:57:14 -- common/autotest_common.sh@817 -- # '[' -z 125197 ']' 00:27:14.413 01:57:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:14.413 01:57:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:14.413 01:57:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:14.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:14.413 01:57:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:14.413 01:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:14.413 [2024-04-24 01:57:14.227729] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
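Condensed, the RPC sequence this test drives against /var/tmp/spdk-raid.sock once the app above is listening (every command and argument below appears verbatim in the trace that follows; only the loop form and the $rpc/$sock shorthand are added):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for i in 1 2 3; do
      $rpc -s $sock bdev_malloc_create 32 512 -b malloc$i                                              # backing malloc bdev
      $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i  # passthru wrapper with fixed UUID
  done
  $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s                     # concat array with superblock (-s)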
00:27:14.413 [2024-04-24 01:57:14.227959] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125197 ] 00:27:14.413 [2024-04-24 01:57:14.405598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.672 [2024-04-24 01:57:14.635157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.932 [2024-04-24 01:57:14.867137] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.191 01:57:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:15.191 01:57:15 -- common/autotest_common.sh@850 -- # return 0 00:27:15.191 01:57:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:27:15.191 01:57:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:15.191 01:57:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:27:15.191 01:57:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:27:15.191 01:57:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:15.192 01:57:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:15.192 01:57:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:15.192 01:57:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:15.192 01:57:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:15.450 malloc1 00:27:15.450 01:57:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:15.709 [2024-04-24 01:57:15.634126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:15.709 [2024-04-24 01:57:15.634235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:15.709 [2024-04-24 01:57:15.634270] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:15.709 [2024-04-24 01:57:15.634318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:15.709 [2024-04-24 01:57:15.637129] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:15.709 [2024-04-24 01:57:15.637201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:15.709 pt1 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:15.709 01:57:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:15.968 malloc2 00:27:15.968 01:57:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:27:16.230 [2024-04-24 01:57:16.110820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:16.230 [2024-04-24 01:57:16.110926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.230 [2024-04-24 01:57:16.110971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:16.230 [2024-04-24 01:57:16.111026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.230 [2024-04-24 01:57:16.113666] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.230 [2024-04-24 01:57:16.113744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:16.230 pt2 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:16.230 01:57:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:16.488 malloc3 00:27:16.489 01:57:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:16.756 [2024-04-24 01:57:16.589068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:16.756 [2024-04-24 01:57:16.589175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.756 [2024-04-24 01:57:16.589221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:16.756 [2024-04-24 01:57:16.589272] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.756 [2024-04-24 01:57:16.591949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.756 [2024-04-24 01:57:16.592017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:16.756 pt3 00:27:16.756 01:57:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:16.756 01:57:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:16.756 01:57:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:27:16.756 [2024-04-24 01:57:16.829188] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:16.756 [2024-04-24 01:57:16.831751] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:16.756 [2024-04-24 01:57:16.831840] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:16.756 [2024-04-24 01:57:16.832056] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:27:16.756 [2024-04-24 01:57:16.832072] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:16.756 [2024-04-24 01:57:16.832240] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:16.756 [2024-04-24 01:57:16.832643] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:27:16.756 [2024-04-24 01:57:16.832663] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:27:16.756 [2024-04-24 01:57:16.832826] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.039 01:57:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.039 01:57:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:17.039 "name": "raid_bdev1", 00:27:17.039 "uuid": "5e0aefd6-ed96-49e2-8cc8-039e20f9d77a", 00:27:17.039 "strip_size_kb": 64, 00:27:17.039 "state": "online", 00:27:17.039 "raid_level": "concat", 00:27:17.039 "superblock": true, 00:27:17.039 "num_base_bdevs": 3, 00:27:17.039 "num_base_bdevs_discovered": 3, 00:27:17.039 "num_base_bdevs_operational": 3, 00:27:17.039 "base_bdevs_list": [ 00:27:17.039 { 00:27:17.039 "name": "pt1", 00:27:17.039 "uuid": "072af2df-5e7e-56ad-81fd-5adc5a386762", 00:27:17.039 "is_configured": true, 00:27:17.039 "data_offset": 2048, 00:27:17.039 "data_size": 63488 00:27:17.039 }, 00:27:17.039 { 00:27:17.039 "name": "pt2", 00:27:17.039 "uuid": "821403dc-9bca-591e-bb2d-24d3d0fcc124", 00:27:17.039 "is_configured": true, 00:27:17.039 "data_offset": 2048, 00:27:17.039 "data_size": 63488 00:27:17.039 }, 00:27:17.039 { 00:27:17.039 "name": "pt3", 00:27:17.039 "uuid": "f814c775-6373-5f46-a332-9dc5771529bb", 00:27:17.039 "is_configured": true, 00:27:17.039 "data_offset": 2048, 00:27:17.039 "data_size": 63488 00:27:17.039 } 00:27:17.039 ] 00:27:17.039 }' 00:27:17.039 01:57:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:17.039 01:57:17 -- common/autotest_common.sh@10 -- # set +x 00:27:17.604 01:57:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:17.604 01:57:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:27:17.862 [2024-04-24 01:57:17.837560] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:17.862 01:57:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5e0aefd6-ed96-49e2-8cc8-039e20f9d77a 00:27:17.862 01:57:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 5e0aefd6-ed96-49e2-8cc8-039e20f9d77a ']' 00:27:17.862 01:57:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:18.118 [2024-04-24 01:57:18.037325] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.118 [2024-04-24 01:57:18.037365] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:18.118 [2024-04-24 01:57:18.037447] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.118 [2024-04-24 01:57:18.037517] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.118 [2024-04-24 01:57:18.037527] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:27:18.118 01:57:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.118 01:57:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:27:18.376 01:57:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:27:18.376 01:57:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:27:18.376 01:57:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:18.376 01:57:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:18.633 01:57:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:18.633 01:57:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:18.891 01:57:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:18.891 01:57:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:19.151 01:57:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:19.151 01:57:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:19.151 01:57:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:27:19.151 01:57:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:19.151 01:57:19 -- common/autotest_common.sh@638 -- # local es=0 00:27:19.151 01:57:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:19.151 01:57:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.151 01:57:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:19.151 01:57:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.151 01:57:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:19.151 01:57:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.151 01:57:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:19.151 01:57:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.151 01:57:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:19.151 01:57:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:19.413 [2024-04-24 01:57:19.401586] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:19.413 [2024-04-24 01:57:19.403718] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:19.413 [2024-04-24 01:57:19.403771] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:19.413 [2024-04-24 01:57:19.403818] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:27:19.413 [2024-04-24 01:57:19.403886] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:27:19.413 [2024-04-24 01:57:19.403925] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:27:19.413 [2024-04-24 01:57:19.403968] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:19.413 [2024-04-24 01:57:19.403978] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:27:19.413 request: 00:27:19.413 { 00:27:19.413 "name": "raid_bdev1", 00:27:19.413 "raid_level": "concat", 00:27:19.413 "base_bdevs": [ 00:27:19.413 "malloc1", 00:27:19.413 "malloc2", 00:27:19.413 "malloc3" 00:27:19.413 ], 00:27:19.413 "superblock": false, 00:27:19.413 "strip_size_kb": 64, 00:27:19.413 "method": "bdev_raid_create", 00:27:19.413 "req_id": 1 00:27:19.413 } 00:27:19.413 Got JSON-RPC error response 00:27:19.413 response: 00:27:19.413 { 00:27:19.413 "code": -17, 00:27:19.413 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:19.413 } 00:27:19.413 01:57:19 -- common/autotest_common.sh@641 -- # es=1 00:27:19.413 01:57:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:19.413 01:57:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:19.413 01:57:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:19.413 01:57:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.413 01:57:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:27:19.671 01:57:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:27:19.671 01:57:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:27:19.671 01:57:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:19.929 [2024-04-24 01:57:19.785574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:19.929 [2024-04-24 01:57:19.785678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.929 [2024-04-24 01:57:19.785715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:19.929 [2024-04-24 01:57:19.785739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.929 [2024-04-24 01:57:19.788088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.929 [2024-04-24 01:57:19.788157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:19.930 [2024-04-24 01:57:19.788293] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:27:19.930 [2024-04-24 01:57:19.788357] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:19.930 pt1 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.930 01:57:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.187 01:57:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:20.187 "name": "raid_bdev1", 00:27:20.187 "uuid": "5e0aefd6-ed96-49e2-8cc8-039e20f9d77a", 00:27:20.187 "strip_size_kb": 64, 00:27:20.187 "state": "configuring", 00:27:20.187 "raid_level": "concat", 00:27:20.187 "superblock": true, 00:27:20.187 "num_base_bdevs": 3, 00:27:20.187 "num_base_bdevs_discovered": 1, 00:27:20.187 "num_base_bdevs_operational": 3, 00:27:20.187 "base_bdevs_list": [ 00:27:20.187 { 00:27:20.187 "name": "pt1", 00:27:20.187 "uuid": "072af2df-5e7e-56ad-81fd-5adc5a386762", 00:27:20.187 "is_configured": true, 00:27:20.187 "data_offset": 2048, 00:27:20.187 "data_size": 63488 00:27:20.187 }, 00:27:20.187 { 00:27:20.187 "name": null, 00:27:20.187 "uuid": "821403dc-9bca-591e-bb2d-24d3d0fcc124", 00:27:20.187 "is_configured": false, 00:27:20.187 "data_offset": 2048, 00:27:20.187 "data_size": 63488 00:27:20.187 }, 00:27:20.187 { 00:27:20.187 "name": null, 00:27:20.187 "uuid": "f814c775-6373-5f46-a332-9dc5771529bb", 00:27:20.187 "is_configured": false, 00:27:20.187 "data_offset": 2048, 00:27:20.187 "data_size": 63488 00:27:20.187 } 00:27:20.187 ] 00:27:20.187 }' 00:27:20.187 01:57:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:20.187 01:57:20 -- common/autotest_common.sh@10 -- # set +x 00:27:20.753 01:57:20 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:27:20.753 01:57:20 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:21.011 [2024-04-24 01:57:20.913851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:21.011 [2024-04-24 01:57:20.913960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.011 [2024-04-24 01:57:20.914012] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:21.011 [2024-04-24 01:57:20.914046] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.011 [2024-04-24 01:57:20.914523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.011 [2024-04-24 01:57:20.914561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:21.011 [2024-04-24 01:57:20.914693] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:21.011 [2024-04-24 01:57:20.914715] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:21.011 pt2 00:27:21.011 01:57:20 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:21.269 [2024-04-24 01:57:21.234004] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.269 01:57:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.527 01:57:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:21.527 "name": "raid_bdev1", 00:27:21.527 "uuid": "5e0aefd6-ed96-49e2-8cc8-039e20f9d77a", 00:27:21.527 "strip_size_kb": 64, 00:27:21.527 "state": "configuring", 00:27:21.527 "raid_level": "concat", 00:27:21.527 "superblock": true, 00:27:21.527 "num_base_bdevs": 3, 00:27:21.527 "num_base_bdevs_discovered": 1, 00:27:21.527 "num_base_bdevs_operational": 3, 00:27:21.527 "base_bdevs_list": [ 00:27:21.527 { 00:27:21.527 "name": "pt1", 00:27:21.527 "uuid": "072af2df-5e7e-56ad-81fd-5adc5a386762", 00:27:21.527 "is_configured": true, 00:27:21.527 "data_offset": 2048, 00:27:21.527 "data_size": 63488 00:27:21.527 }, 00:27:21.527 { 00:27:21.527 "name": null, 00:27:21.527 "uuid": "821403dc-9bca-591e-bb2d-24d3d0fcc124", 00:27:21.527 "is_configured": false, 00:27:21.527 "data_offset": 2048, 00:27:21.527 "data_size": 63488 00:27:21.527 }, 00:27:21.527 { 00:27:21.527 "name": null, 00:27:21.527 "uuid": "f814c775-6373-5f46-a332-9dc5771529bb", 00:27:21.527 "is_configured": false, 00:27:21.527 "data_offset": 2048, 00:27:21.527 "data_size": 63488 00:27:21.527 } 00:27:21.527 ] 00:27:21.527 }' 00:27:21.527 01:57:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:21.527 01:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:22.460 01:57:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:27:22.460 01:57:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:22.460 01:57:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:22.460 [2024-04-24 01:57:22.454271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:22.460 [2024-04-24 01:57:22.454401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.460 [2024-04-24 01:57:22.454441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:22.460 [2024-04-24 01:57:22.454471] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.460 [2024-04-24 01:57:22.454948] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.460 [2024-04-24 01:57:22.454996] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:22.460 [2024-04-24 01:57:22.455140] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:22.460 [2024-04-24 01:57:22.455163] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:22.460 pt2 00:27:22.460 01:57:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:27:22.460 01:57:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:22.460 01:57:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:22.718 [2024-04-24 01:57:22.730292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:22.718 [2024-04-24 01:57:22.730392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.718 [2024-04-24 01:57:22.730431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:22.718 [2024-04-24 01:57:22.730462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.718 [2024-04-24 01:57:22.730961] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.718 [2024-04-24 01:57:22.731008] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:22.718 [2024-04-24 01:57:22.731149] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:27:22.718 [2024-04-24 01:57:22.731172] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:22.718 [2024-04-24 01:57:22.731294] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:22.718 [2024-04-24 01:57:22.731303] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:22.718 [2024-04-24 01:57:22.731437] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:22.718 [2024-04-24 01:57:22.731772] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:22.718 [2024-04-24 01:57:22.731792] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:27:22.718 [2024-04-24 01:57:22.731950] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.718 pt3 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:22.718 01:57:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.718 
01:57:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.975 01:57:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:22.975 "name": "raid_bdev1", 00:27:22.975 "uuid": "5e0aefd6-ed96-49e2-8cc8-039e20f9d77a", 00:27:22.975 "strip_size_kb": 64, 00:27:22.975 "state": "online", 00:27:22.975 "raid_level": "concat", 00:27:22.975 "superblock": true, 00:27:22.975 "num_base_bdevs": 3, 00:27:22.975 "num_base_bdevs_discovered": 3, 00:27:22.975 "num_base_bdevs_operational": 3, 00:27:22.975 "base_bdevs_list": [ 00:27:22.975 { 00:27:22.975 "name": "pt1", 00:27:22.975 "uuid": "072af2df-5e7e-56ad-81fd-5adc5a386762", 00:27:22.975 "is_configured": true, 00:27:22.975 "data_offset": 2048, 00:27:22.975 "data_size": 63488 00:27:22.975 }, 00:27:22.975 { 00:27:22.975 "name": "pt2", 00:27:22.975 "uuid": "821403dc-9bca-591e-bb2d-24d3d0fcc124", 00:27:22.975 "is_configured": true, 00:27:22.975 "data_offset": 2048, 00:27:22.975 "data_size": 63488 00:27:22.975 }, 00:27:22.975 { 00:27:22.975 "name": "pt3", 00:27:22.975 "uuid": "f814c775-6373-5f46-a332-9dc5771529bb", 00:27:22.975 "is_configured": true, 00:27:22.975 "data_offset": 2048, 00:27:22.975 "data_size": 63488 00:27:22.975 } 00:27:22.975 ] 00:27:22.975 }' 00:27:22.975 01:57:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:22.975 01:57:23 -- common/autotest_common.sh@10 -- # set +x 00:27:23.634 01:57:23 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:23.634 01:57:23 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:27:23.893 [2024-04-24 01:57:23.894804] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.893 01:57:23 -- bdev/bdev_raid.sh@430 -- # '[' 5e0aefd6-ed96-49e2-8cc8-039e20f9d77a '!=' 5e0aefd6-ed96-49e2-8cc8-039e20f9d77a ']' 00:27:23.893 01:57:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:27:23.893 01:57:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:23.893 01:57:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:27:23.893 01:57:23 -- bdev/bdev_raid.sh@511 -- # killprocess 125197 00:27:23.893 01:57:23 -- common/autotest_common.sh@936 -- # '[' -z 125197 ']' 00:27:23.893 01:57:23 -- common/autotest_common.sh@940 -- # kill -0 125197 00:27:23.893 01:57:23 -- common/autotest_common.sh@941 -- # uname 00:27:23.893 01:57:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:23.893 01:57:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125197 00:27:23.893 01:57:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:23.893 01:57:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:23.893 01:57:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125197' 00:27:23.893 killing process with pid 125197 00:27:23.893 01:57:23 -- common/autotest_common.sh@955 -- # kill 125197 00:27:23.893 [2024-04-24 01:57:23.947188] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:23.893 [2024-04-24 01:57:23.947275] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:23.893 [2024-04-24 01:57:23.947336] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:23.893 [2024-04-24 01:57:23.947349] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:27:23.893 01:57:23 -- common/autotest_common.sh@960 -- # wait 125197 00:27:24.460 [2024-04-24 01:57:24.292506] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:25.839 ************************************ 00:27:25.839 END TEST raid_superblock_test 00:27:25.839 ************************************ 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@513 -- # return 0 00:27:25.839 00:27:25.839 real 0m11.611s 00:27:25.839 user 0m19.462s 00:27:25.839 sys 0m1.602s 00:27:25.839 01:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:25.839 01:57:25 -- common/autotest_common.sh@10 -- # set +x 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:27:25.839 01:57:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:25.839 01:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:25.839 01:57:25 -- common/autotest_common.sh@10 -- # set +x 00:27:25.839 ************************************ 00:27:25.839 START TEST raid_state_function_test 00:27:25.839 ************************************ 00:27:25.839 01:57:25 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 false 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=125518 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125518' 00:27:25.839 Process raid pid: 125518 00:27:25.839 01:57:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125518 /var/tmp/spdk-raid.sock 00:27:25.839 01:57:25 -- common/autotest_common.sh@817 -- # '[' -z 125518 ']' 00:27:25.839 01:57:25 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:25.839 01:57:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:25.839 01:57:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:25.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:25.839 01:57:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:25.839 01:57:25 -- common/autotest_common.sh@10 -- # set +x 00:27:25.839 [2024-04-24 01:57:25.919166] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:27:25.839 [2024-04-24 01:57:25.919315] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.098 [2024-04-24 01:57:26.084541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.356 [2024-04-24 01:57:26.323122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.614 [2024-04-24 01:57:26.575266] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:26.907 01:57:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:26.907 01:57:26 -- common/autotest_common.sh@850 -- # return 0 00:27:26.908 01:57:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:27.166 [2024-04-24 01:57:27.042296] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:27.166 [2024-04-24 01:57:27.042397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:27.166 [2024-04-24 01:57:27.042409] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:27.166 [2024-04-24 01:57:27.042430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:27.166 [2024-04-24 01:57:27.042437] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:27.166 [2024-04-24 01:57:27.042487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.166 01:57:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.425 01:57:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:27.426 "name": 
"Existed_Raid", 00:27:27.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.426 "strip_size_kb": 0, 00:27:27.426 "state": "configuring", 00:27:27.426 "raid_level": "raid1", 00:27:27.426 "superblock": false, 00:27:27.426 "num_base_bdevs": 3, 00:27:27.426 "num_base_bdevs_discovered": 0, 00:27:27.426 "num_base_bdevs_operational": 3, 00:27:27.426 "base_bdevs_list": [ 00:27:27.426 { 00:27:27.426 "name": "BaseBdev1", 00:27:27.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.426 "is_configured": false, 00:27:27.426 "data_offset": 0, 00:27:27.426 "data_size": 0 00:27:27.426 }, 00:27:27.426 { 00:27:27.426 "name": "BaseBdev2", 00:27:27.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.426 "is_configured": false, 00:27:27.426 "data_offset": 0, 00:27:27.426 "data_size": 0 00:27:27.426 }, 00:27:27.426 { 00:27:27.426 "name": "BaseBdev3", 00:27:27.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.426 "is_configured": false, 00:27:27.426 "data_offset": 0, 00:27:27.426 "data_size": 0 00:27:27.426 } 00:27:27.426 ] 00:27:27.426 }' 00:27:27.426 01:57:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:27.426 01:57:27 -- common/autotest_common.sh@10 -- # set +x 00:27:27.992 01:57:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:28.250 [2024-04-24 01:57:28.138389] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:28.250 [2024-04-24 01:57:28.138457] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:27:28.250 01:57:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:28.250 [2024-04-24 01:57:28.318431] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:28.250 [2024-04-24 01:57:28.318512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:28.250 [2024-04-24 01:57:28.318523] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:28.250 [2024-04-24 01:57:28.318542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:28.250 [2024-04-24 01:57:28.318549] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:28.250 [2024-04-24 01:57:28.318574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:28.509 01:57:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:28.509 [2024-04-24 01:57:28.548010] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:28.509 BaseBdev1 00:27:28.509 01:57:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:28.509 01:57:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:28.509 01:57:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:28.509 01:57:28 -- common/autotest_common.sh@887 -- # local i 00:27:28.509 01:57:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:28.509 01:57:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:28.509 01:57:28 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:28.768 
01:57:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:29.025 [ 00:27:29.025 { 00:27:29.025 "name": "BaseBdev1", 00:27:29.025 "aliases": [ 00:27:29.025 "c8b98f2e-85b0-4fae-b669-513d23b09d43" 00:27:29.025 ], 00:27:29.025 "product_name": "Malloc disk", 00:27:29.025 "block_size": 512, 00:27:29.025 "num_blocks": 65536, 00:27:29.025 "uuid": "c8b98f2e-85b0-4fae-b669-513d23b09d43", 00:27:29.025 "assigned_rate_limits": { 00:27:29.025 "rw_ios_per_sec": 0, 00:27:29.025 "rw_mbytes_per_sec": 0, 00:27:29.025 "r_mbytes_per_sec": 0, 00:27:29.025 "w_mbytes_per_sec": 0 00:27:29.025 }, 00:27:29.026 "claimed": true, 00:27:29.026 "claim_type": "exclusive_write", 00:27:29.026 "zoned": false, 00:27:29.026 "supported_io_types": { 00:27:29.026 "read": true, 00:27:29.026 "write": true, 00:27:29.026 "unmap": true, 00:27:29.026 "write_zeroes": true, 00:27:29.026 "flush": true, 00:27:29.026 "reset": true, 00:27:29.026 "compare": false, 00:27:29.026 "compare_and_write": false, 00:27:29.026 "abort": true, 00:27:29.026 "nvme_admin": false, 00:27:29.026 "nvme_io": false 00:27:29.026 }, 00:27:29.026 "memory_domains": [ 00:27:29.026 { 00:27:29.026 "dma_device_id": "system", 00:27:29.026 "dma_device_type": 1 00:27:29.026 }, 00:27:29.026 { 00:27:29.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.026 "dma_device_type": 2 00:27:29.026 } 00:27:29.026 ], 00:27:29.026 "driver_specific": {} 00:27:29.026 } 00:27:29.026 ] 00:27:29.026 01:57:28 -- common/autotest_common.sh@893 -- # return 0 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.026 01:57:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.284 01:57:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:29.284 "name": "Existed_Raid", 00:27:29.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.284 "strip_size_kb": 0, 00:27:29.284 "state": "configuring", 00:27:29.284 "raid_level": "raid1", 00:27:29.284 "superblock": false, 00:27:29.284 "num_base_bdevs": 3, 00:27:29.284 "num_base_bdevs_discovered": 1, 00:27:29.284 "num_base_bdevs_operational": 3, 00:27:29.284 "base_bdevs_list": [ 00:27:29.284 { 00:27:29.284 "name": "BaseBdev1", 00:27:29.284 "uuid": "c8b98f2e-85b0-4fae-b669-513d23b09d43", 00:27:29.284 "is_configured": true, 00:27:29.284 "data_offset": 0, 00:27:29.284 "data_size": 65536 00:27:29.284 }, 00:27:29.284 { 00:27:29.284 "name": "BaseBdev2", 00:27:29.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.284 "is_configured": false, 00:27:29.284 "data_offset": 0, 00:27:29.284 "data_size": 0 00:27:29.284 }, 
00:27:29.284 { 00:27:29.284 "name": "BaseBdev3", 00:27:29.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.284 "is_configured": false, 00:27:29.284 "data_offset": 0, 00:27:29.284 "data_size": 0 00:27:29.284 } 00:27:29.284 ] 00:27:29.284 }' 00:27:29.284 01:57:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:29.284 01:57:29 -- common/autotest_common.sh@10 -- # set +x 00:27:29.850 01:57:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:29.850 [2024-04-24 01:57:29.932405] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:29.850 [2024-04-24 01:57:29.932474] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:27:30.110 01:57:29 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:27:30.111 01:57:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:30.111 [2024-04-24 01:57:30.120506] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:30.111 [2024-04-24 01:57:30.122996] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:30.111 [2024-04-24 01:57:30.123062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:30.111 [2024-04-24 01:57:30.123076] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:30.111 [2024-04-24 01:57:30.123105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.111 01:57:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.370 01:57:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:30.370 "name": "Existed_Raid", 00:27:30.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.370 "strip_size_kb": 0, 00:27:30.370 "state": "configuring", 00:27:30.370 "raid_level": "raid1", 00:27:30.370 "superblock": false, 00:27:30.370 "num_base_bdevs": 3, 00:27:30.370 "num_base_bdevs_discovered": 1, 00:27:30.370 "num_base_bdevs_operational": 3, 00:27:30.370 "base_bdevs_list": [ 00:27:30.370 { 00:27:30.370 "name": "BaseBdev1", 00:27:30.370 "uuid": "c8b98f2e-85b0-4fae-b669-513d23b09d43", 00:27:30.370 "is_configured": true, 00:27:30.370 
"data_offset": 0, 00:27:30.370 "data_size": 65536 00:27:30.370 }, 00:27:30.370 { 00:27:30.370 "name": "BaseBdev2", 00:27:30.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.370 "is_configured": false, 00:27:30.370 "data_offset": 0, 00:27:30.370 "data_size": 0 00:27:30.370 }, 00:27:30.370 { 00:27:30.370 "name": "BaseBdev3", 00:27:30.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.370 "is_configured": false, 00:27:30.370 "data_offset": 0, 00:27:30.370 "data_size": 0 00:27:30.370 } 00:27:30.370 ] 00:27:30.370 }' 00:27:30.370 01:57:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:30.370 01:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:31.305 01:57:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:31.305 [2024-04-24 01:57:31.341411] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:31.305 BaseBdev2 00:27:31.305 01:57:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:31.305 01:57:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:27:31.305 01:57:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:31.305 01:57:31 -- common/autotest_common.sh@887 -- # local i 00:27:31.305 01:57:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:31.305 01:57:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:31.305 01:57:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:31.563 01:57:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:31.822 [ 00:27:31.822 { 00:27:31.822 "name": "BaseBdev2", 00:27:31.822 "aliases": [ 00:27:31.822 "012d660f-e699-43b4-8343-3da6a5ea657d" 00:27:31.822 ], 00:27:31.822 "product_name": "Malloc disk", 00:27:31.822 "block_size": 512, 00:27:31.822 "num_blocks": 65536, 00:27:31.822 "uuid": "012d660f-e699-43b4-8343-3da6a5ea657d", 00:27:31.822 "assigned_rate_limits": { 00:27:31.822 "rw_ios_per_sec": 0, 00:27:31.822 "rw_mbytes_per_sec": 0, 00:27:31.822 "r_mbytes_per_sec": 0, 00:27:31.822 "w_mbytes_per_sec": 0 00:27:31.822 }, 00:27:31.822 "claimed": true, 00:27:31.822 "claim_type": "exclusive_write", 00:27:31.822 "zoned": false, 00:27:31.822 "supported_io_types": { 00:27:31.822 "read": true, 00:27:31.822 "write": true, 00:27:31.822 "unmap": true, 00:27:31.822 "write_zeroes": true, 00:27:31.822 "flush": true, 00:27:31.822 "reset": true, 00:27:31.822 "compare": false, 00:27:31.822 "compare_and_write": false, 00:27:31.822 "abort": true, 00:27:31.822 "nvme_admin": false, 00:27:31.822 "nvme_io": false 00:27:31.822 }, 00:27:31.822 "memory_domains": [ 00:27:31.822 { 00:27:31.822 "dma_device_id": "system", 00:27:31.822 "dma_device_type": 1 00:27:31.822 }, 00:27:31.822 { 00:27:31.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.822 "dma_device_type": 2 00:27:31.822 } 00:27:31.822 ], 00:27:31.822 "driver_specific": {} 00:27:31.822 } 00:27:31.822 ] 00:27:31.822 01:57:31 -- common/autotest_common.sh@893 -- # return 0 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:31.822 01:57:31 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:31.822 01:57:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.081 01:57:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.081 01:57:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:32.081 "name": "Existed_Raid", 00:27:32.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.081 "strip_size_kb": 0, 00:27:32.081 "state": "configuring", 00:27:32.081 "raid_level": "raid1", 00:27:32.081 "superblock": false, 00:27:32.081 "num_base_bdevs": 3, 00:27:32.081 "num_base_bdevs_discovered": 2, 00:27:32.081 "num_base_bdevs_operational": 3, 00:27:32.081 "base_bdevs_list": [ 00:27:32.081 { 00:27:32.081 "name": "BaseBdev1", 00:27:32.081 "uuid": "c8b98f2e-85b0-4fae-b669-513d23b09d43", 00:27:32.081 "is_configured": true, 00:27:32.081 "data_offset": 0, 00:27:32.081 "data_size": 65536 00:27:32.081 }, 00:27:32.081 { 00:27:32.081 "name": "BaseBdev2", 00:27:32.081 "uuid": "012d660f-e699-43b4-8343-3da6a5ea657d", 00:27:32.081 "is_configured": true, 00:27:32.081 "data_offset": 0, 00:27:32.081 "data_size": 65536 00:27:32.081 }, 00:27:32.081 { 00:27:32.081 "name": "BaseBdev3", 00:27:32.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.081 "is_configured": false, 00:27:32.081 "data_offset": 0, 00:27:32.081 "data_size": 0 00:27:32.081 } 00:27:32.081 ] 00:27:32.081 }' 00:27:32.081 01:57:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:32.081 01:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:33.017 01:57:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:33.018 [2024-04-24 01:57:33.064434] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:33.018 [2024-04-24 01:57:33.064513] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:33.018 [2024-04-24 01:57:33.064523] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:33.018 [2024-04-24 01:57:33.064686] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:27:33.018 [2024-04-24 01:57:33.065028] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:33.018 [2024-04-24 01:57:33.065049] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:27:33.018 [2024-04-24 01:57:33.065307] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.018 BaseBdev3 00:27:33.018 01:57:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:33.018 01:57:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:27:33.018 01:57:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:33.018 01:57:33 -- common/autotest_common.sh@887 -- # local i 00:27:33.018 01:57:33 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:33.018 01:57:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:33.018 01:57:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:33.582 01:57:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:33.582 [ 00:27:33.582 { 00:27:33.582 "name": "BaseBdev3", 00:27:33.582 "aliases": [ 00:27:33.582 "9007f9cd-a075-45f1-a714-2dbe948ccecf" 00:27:33.582 ], 00:27:33.582 "product_name": "Malloc disk", 00:27:33.582 "block_size": 512, 00:27:33.582 "num_blocks": 65536, 00:27:33.582 "uuid": "9007f9cd-a075-45f1-a714-2dbe948ccecf", 00:27:33.582 "assigned_rate_limits": { 00:27:33.582 "rw_ios_per_sec": 0, 00:27:33.582 "rw_mbytes_per_sec": 0, 00:27:33.582 "r_mbytes_per_sec": 0, 00:27:33.582 "w_mbytes_per_sec": 0 00:27:33.582 }, 00:27:33.582 "claimed": true, 00:27:33.582 "claim_type": "exclusive_write", 00:27:33.582 "zoned": false, 00:27:33.582 "supported_io_types": { 00:27:33.582 "read": true, 00:27:33.582 "write": true, 00:27:33.582 "unmap": true, 00:27:33.582 "write_zeroes": true, 00:27:33.582 "flush": true, 00:27:33.582 "reset": true, 00:27:33.582 "compare": false, 00:27:33.582 "compare_and_write": false, 00:27:33.582 "abort": true, 00:27:33.582 "nvme_admin": false, 00:27:33.582 "nvme_io": false 00:27:33.582 }, 00:27:33.582 "memory_domains": [ 00:27:33.582 { 00:27:33.582 "dma_device_id": "system", 00:27:33.582 "dma_device_type": 1 00:27:33.582 }, 00:27:33.582 { 00:27:33.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.582 "dma_device_type": 2 00:27:33.582 } 00:27:33.582 ], 00:27:33.582 "driver_specific": {} 00:27:33.582 } 00:27:33.582 ] 00:27:33.582 01:57:33 -- common/autotest_common.sh@893 -- # return 0 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.582 01:57:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:33.841 01:57:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:33.841 "name": "Existed_Raid", 00:27:33.841 "uuid": "be748f53-063f-41e9-afe3-7666b5e5392c", 00:27:33.841 "strip_size_kb": 0, 00:27:33.841 "state": "online", 00:27:33.841 "raid_level": "raid1", 00:27:33.841 "superblock": false, 00:27:33.841 "num_base_bdevs": 3, 00:27:33.841 "num_base_bdevs_discovered": 3, 00:27:33.841 "num_base_bdevs_operational": 3, 00:27:33.841 "base_bdevs_list": [ 00:27:33.841 { 00:27:33.841 "name": 
"BaseBdev1", 00:27:33.841 "uuid": "c8b98f2e-85b0-4fae-b669-513d23b09d43", 00:27:33.841 "is_configured": true, 00:27:33.841 "data_offset": 0, 00:27:33.841 "data_size": 65536 00:27:33.841 }, 00:27:33.841 { 00:27:33.841 "name": "BaseBdev2", 00:27:33.841 "uuid": "012d660f-e699-43b4-8343-3da6a5ea657d", 00:27:33.841 "is_configured": true, 00:27:33.841 "data_offset": 0, 00:27:33.841 "data_size": 65536 00:27:33.841 }, 00:27:33.841 { 00:27:33.841 "name": "BaseBdev3", 00:27:33.841 "uuid": "9007f9cd-a075-45f1-a714-2dbe948ccecf", 00:27:33.841 "is_configured": true, 00:27:33.841 "data_offset": 0, 00:27:33.841 "data_size": 65536 00:27:33.841 } 00:27:33.841 ] 00:27:33.841 }' 00:27:33.841 01:57:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:33.841 01:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:34.409 01:57:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:34.672 [2024-04-24 01:57:34.649169] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:34.931 01:57:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.189 01:57:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:35.189 "name": "Existed_Raid", 00:27:35.189 "uuid": "be748f53-063f-41e9-afe3-7666b5e5392c", 00:27:35.189 "strip_size_kb": 0, 00:27:35.189 "state": "online", 00:27:35.189 "raid_level": "raid1", 00:27:35.189 "superblock": false, 00:27:35.189 "num_base_bdevs": 3, 00:27:35.189 "num_base_bdevs_discovered": 2, 00:27:35.189 "num_base_bdevs_operational": 2, 00:27:35.189 "base_bdevs_list": [ 00:27:35.189 { 00:27:35.189 "name": null, 00:27:35.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.189 "is_configured": false, 00:27:35.189 "data_offset": 0, 00:27:35.189 "data_size": 65536 00:27:35.189 }, 00:27:35.189 { 00:27:35.189 "name": "BaseBdev2", 00:27:35.189 "uuid": "012d660f-e699-43b4-8343-3da6a5ea657d", 00:27:35.189 "is_configured": true, 00:27:35.189 "data_offset": 0, 00:27:35.189 "data_size": 65536 00:27:35.189 }, 00:27:35.189 { 00:27:35.189 "name": "BaseBdev3", 00:27:35.189 "uuid": "9007f9cd-a075-45f1-a714-2dbe948ccecf", 00:27:35.189 "is_configured": true, 00:27:35.189 "data_offset": 0, 00:27:35.189 "data_size": 
65536 00:27:35.189 } 00:27:35.189 ] 00:27:35.189 }' 00:27:35.189 01:57:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:35.189 01:57:35 -- common/autotest_common.sh@10 -- # set +x 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:35.754 01:57:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:36.013 [2024-04-24 01:57:36.051376] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:36.271 01:57:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:36.271 01:57:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:36.271 01:57:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.271 01:57:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:36.529 01:57:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:36.529 01:57:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:36.529 01:57:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:36.529 [2024-04-24 01:57:36.567284] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:36.529 [2024-04-24 01:57:36.567381] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:36.787 [2024-04-24 01:57:36.683813] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.787 [2024-04-24 01:57:36.683950] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.787 [2024-04-24 01:57:36.683962] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:27:36.787 01:57:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:36.787 01:57:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:36.787 01:57:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.787 01:57:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:37.045 01:57:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:37.045 01:57:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:37.045 01:57:36 -- bdev/bdev_raid.sh@287 -- # killprocess 125518 00:27:37.045 01:57:36 -- common/autotest_common.sh@936 -- # '[' -z 125518 ']' 00:27:37.045 01:57:36 -- common/autotest_common.sh@940 -- # kill -0 125518 00:27:37.045 01:57:36 -- common/autotest_common.sh@941 -- # uname 00:27:37.045 01:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:37.045 01:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125518 00:27:37.045 01:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:37.045 01:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:37.045 01:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125518' 
00:27:37.045 killing process with pid 125518 00:27:37.045 01:57:37 -- common/autotest_common.sh@955 -- # kill 125518 00:27:37.045 [2024-04-24 01:57:37.030474] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:37.045 [2024-04-24 01:57:37.030590] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:37.045 01:57:37 -- common/autotest_common.sh@960 -- # wait 125518 00:27:38.944 01:57:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:38.944 00:27:38.944 real 0m12.672s 00:27:38.944 user 0m21.417s 00:27:38.944 sys 0m1.795s 00:27:38.944 01:57:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:38.944 ************************************ 00:27:38.944 END TEST raid_state_function_test 00:27:38.944 ************************************ 00:27:38.944 01:57:38 -- common/autotest_common.sh@10 -- # set +x 00:27:38.944 01:57:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:27:38.944 01:57:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:38.944 01:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.945 01:57:38 -- common/autotest_common.sh@10 -- # set +x 00:27:38.945 ************************************ 00:27:38.945 START TEST raid_state_function_test_sb 00:27:38.945 ************************************ 00:27:38.945 01:57:38 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 true 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=125911 00:27:38.945 Process raid pid: 125911 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125911' 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:38.945 01:57:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125911 /var/tmp/spdk-raid.sock 00:27:38.945 01:57:38 -- common/autotest_common.sh@817 -- # '[' -z 125911 ']' 00:27:38.945 01:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:38.945 01:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:38.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:38.945 01:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:38.945 01:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:38.945 01:57:38 -- common/autotest_common.sh@10 -- # set +x 00:27:38.945 [2024-04-24 01:57:38.705528] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:27:38.945 [2024-04-24 01:57:38.705759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.945 [2024-04-24 01:57:38.894583] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.203 [2024-04-24 01:57:39.119695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.461 [2024-04-24 01:57:39.361233] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:39.720 01:57:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:39.720 01:57:39 -- common/autotest_common.sh@850 -- # return 0 00:27:39.720 01:57:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:39.980 [2024-04-24 01:57:39.850314] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:39.980 [2024-04-24 01:57:39.850405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:39.980 [2024-04-24 01:57:39.850417] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:39.980 [2024-04-24 01:57:39.850437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:39.980 [2024-04-24 01:57:39.850445] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:39.980 [2024-04-24 01:57:39.850486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:39.980 01:57:39 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.980 01:57:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.980 01:57:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:39.980 "name": "Existed_Raid", 00:27:39.980 "uuid": "35c7e601-d2f8-4ebe-a3ed-14dd5006b434", 00:27:39.980 "strip_size_kb": 0, 00:27:39.980 "state": "configuring", 00:27:39.980 "raid_level": "raid1", 00:27:39.980 "superblock": true, 00:27:39.980 "num_base_bdevs": 3, 00:27:39.980 "num_base_bdevs_discovered": 0, 00:27:39.980 "num_base_bdevs_operational": 3, 00:27:39.980 "base_bdevs_list": [ 00:27:39.980 { 00:27:39.980 "name": "BaseBdev1", 00:27:39.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.980 "is_configured": false, 00:27:39.980 "data_offset": 0, 00:27:39.980 "data_size": 0 00:27:39.980 }, 00:27:39.980 { 00:27:39.980 "name": "BaseBdev2", 00:27:39.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.980 "is_configured": false, 00:27:39.980 "data_offset": 0, 00:27:39.980 "data_size": 0 00:27:39.980 }, 00:27:39.980 { 00:27:39.980 "name": "BaseBdev3", 00:27:39.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.980 "is_configured": false, 00:27:39.980 "data_offset": 0, 00:27:39.980 "data_size": 0 00:27:39.980 } 00:27:39.980 ] 00:27:39.980 }' 00:27:39.980 01:57:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:39.980 01:57:40 -- common/autotest_common.sh@10 -- # set +x 00:27:40.916 01:57:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:40.916 [2024-04-24 01:57:40.926403] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:40.916 [2024-04-24 01:57:40.926459] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:27:40.916 01:57:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:41.480 [2024-04-24 01:57:41.298507] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:41.480 [2024-04-24 01:57:41.298599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:41.480 [2024-04-24 01:57:41.298611] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:41.480 [2024-04-24 01:57:41.298633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:41.480 [2024-04-24 01:57:41.298642] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:41.480 [2024-04-24 01:57:41.298668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:41.480 01:57:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:41.738 [2024-04-24 01:57:41.600408] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:41.738 BaseBdev1 00:27:41.738 01:57:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:41.738 01:57:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:41.738 01:57:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:41.738 01:57:41 -- common/autotest_common.sh@887 -- # local i 
00:27:41.738 01:57:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:41.738 01:57:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:41.738 01:57:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:41.996 01:57:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:42.255 [ 00:27:42.255 { 00:27:42.255 "name": "BaseBdev1", 00:27:42.255 "aliases": [ 00:27:42.255 "3f3bacd8-3bd2-43d0-8e9c-2e664c677dd5" 00:27:42.255 ], 00:27:42.255 "product_name": "Malloc disk", 00:27:42.255 "block_size": 512, 00:27:42.255 "num_blocks": 65536, 00:27:42.255 "uuid": "3f3bacd8-3bd2-43d0-8e9c-2e664c677dd5", 00:27:42.255 "assigned_rate_limits": { 00:27:42.255 "rw_ios_per_sec": 0, 00:27:42.255 "rw_mbytes_per_sec": 0, 00:27:42.255 "r_mbytes_per_sec": 0, 00:27:42.255 "w_mbytes_per_sec": 0 00:27:42.255 }, 00:27:42.255 "claimed": true, 00:27:42.255 "claim_type": "exclusive_write", 00:27:42.255 "zoned": false, 00:27:42.255 "supported_io_types": { 00:27:42.255 "read": true, 00:27:42.255 "write": true, 00:27:42.255 "unmap": true, 00:27:42.255 "write_zeroes": true, 00:27:42.255 "flush": true, 00:27:42.255 "reset": true, 00:27:42.255 "compare": false, 00:27:42.255 "compare_and_write": false, 00:27:42.255 "abort": true, 00:27:42.255 "nvme_admin": false, 00:27:42.255 "nvme_io": false 00:27:42.255 }, 00:27:42.255 "memory_domains": [ 00:27:42.255 { 00:27:42.255 "dma_device_id": "system", 00:27:42.255 "dma_device_type": 1 00:27:42.255 }, 00:27:42.255 { 00:27:42.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.255 "dma_device_type": 2 00:27:42.255 } 00:27:42.255 ], 00:27:42.255 "driver_specific": {} 00:27:42.255 } 00:27:42.255 ] 00:27:42.255 01:57:42 -- common/autotest_common.sh@893 -- # return 0 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.255 01:57:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:42.255 "name": "Existed_Raid", 00:27:42.255 "uuid": "5c477b77-4940-4f72-b840-024769d467bc", 00:27:42.255 "strip_size_kb": 0, 00:27:42.255 "state": "configuring", 00:27:42.255 "raid_level": "raid1", 00:27:42.255 "superblock": true, 00:27:42.255 "num_base_bdevs": 3, 00:27:42.255 "num_base_bdevs_discovered": 1, 00:27:42.255 "num_base_bdevs_operational": 3, 00:27:42.255 "base_bdevs_list": [ 00:27:42.255 { 00:27:42.255 "name": "BaseBdev1", 00:27:42.255 "uuid": "3f3bacd8-3bd2-43d0-8e9c-2e664c677dd5", 00:27:42.255 
"is_configured": true, 00:27:42.255 "data_offset": 2048, 00:27:42.255 "data_size": 63488 00:27:42.255 }, 00:27:42.255 { 00:27:42.255 "name": "BaseBdev2", 00:27:42.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.255 "is_configured": false, 00:27:42.256 "data_offset": 0, 00:27:42.256 "data_size": 0 00:27:42.256 }, 00:27:42.256 { 00:27:42.256 "name": "BaseBdev3", 00:27:42.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.256 "is_configured": false, 00:27:42.256 "data_offset": 0, 00:27:42.256 "data_size": 0 00:27:42.256 } 00:27:42.256 ] 00:27:42.256 }' 00:27:42.256 01:57:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:42.256 01:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:42.822 01:57:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:43.081 [2024-04-24 01:57:43.060787] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:43.081 [2024-04-24 01:57:43.060853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:27:43.081 01:57:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:27:43.081 01:57:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:43.339 01:57:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:43.597 BaseBdev1 00:27:43.597 01:57:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:27:43.597 01:57:43 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:43.597 01:57:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:43.597 01:57:43 -- common/autotest_common.sh@887 -- # local i 00:27:43.597 01:57:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:43.597 01:57:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:43.597 01:57:43 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:43.855 01:57:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:44.115 [ 00:27:44.115 { 00:27:44.115 "name": "BaseBdev1", 00:27:44.115 "aliases": [ 00:27:44.115 "8a619a9a-533d-4020-854b-1c2d7c384cfe" 00:27:44.115 ], 00:27:44.115 "product_name": "Malloc disk", 00:27:44.115 "block_size": 512, 00:27:44.115 "num_blocks": 65536, 00:27:44.115 "uuid": "8a619a9a-533d-4020-854b-1c2d7c384cfe", 00:27:44.115 "assigned_rate_limits": { 00:27:44.115 "rw_ios_per_sec": 0, 00:27:44.115 "rw_mbytes_per_sec": 0, 00:27:44.115 "r_mbytes_per_sec": 0, 00:27:44.115 "w_mbytes_per_sec": 0 00:27:44.115 }, 00:27:44.115 "claimed": false, 00:27:44.115 "zoned": false, 00:27:44.115 "supported_io_types": { 00:27:44.115 "read": true, 00:27:44.115 "write": true, 00:27:44.115 "unmap": true, 00:27:44.115 "write_zeroes": true, 00:27:44.115 "flush": true, 00:27:44.115 "reset": true, 00:27:44.115 "compare": false, 00:27:44.115 "compare_and_write": false, 00:27:44.115 "abort": true, 00:27:44.115 "nvme_admin": false, 00:27:44.115 "nvme_io": false 00:27:44.115 }, 00:27:44.115 "memory_domains": [ 00:27:44.115 { 00:27:44.115 "dma_device_id": "system", 00:27:44.115 "dma_device_type": 1 00:27:44.115 }, 00:27:44.115 { 00:27:44.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.115 "dma_device_type": 2 
00:27:44.115 } 00:27:44.115 ], 00:27:44.115 "driver_specific": {} 00:27:44.115 } 00:27:44.115 ] 00:27:44.115 01:57:44 -- common/autotest_common.sh@893 -- # return 0 00:27:44.115 01:57:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:44.115 [2024-04-24 01:57:44.191358] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:44.115 [2024-04-24 01:57:44.193285] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:44.115 [2024-04-24 01:57:44.193345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:44.115 [2024-04-24 01:57:44.193354] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:44.115 [2024-04-24 01:57:44.193378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:44.373 "name": "Existed_Raid", 00:27:44.373 "uuid": "738bc720-0a39-4df5-bbab-65c4d26f6e9d", 00:27:44.373 "strip_size_kb": 0, 00:27:44.373 "state": "configuring", 00:27:44.373 "raid_level": "raid1", 00:27:44.373 "superblock": true, 00:27:44.373 "num_base_bdevs": 3, 00:27:44.373 "num_base_bdevs_discovered": 1, 00:27:44.373 "num_base_bdevs_operational": 3, 00:27:44.373 "base_bdevs_list": [ 00:27:44.373 { 00:27:44.373 "name": "BaseBdev1", 00:27:44.373 "uuid": "8a619a9a-533d-4020-854b-1c2d7c384cfe", 00:27:44.373 "is_configured": true, 00:27:44.373 "data_offset": 2048, 00:27:44.373 "data_size": 63488 00:27:44.373 }, 00:27:44.373 { 00:27:44.373 "name": "BaseBdev2", 00:27:44.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.373 "is_configured": false, 00:27:44.373 "data_offset": 0, 00:27:44.373 "data_size": 0 00:27:44.373 }, 00:27:44.373 { 00:27:44.373 "name": "BaseBdev3", 00:27:44.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.373 "is_configured": false, 00:27:44.373 "data_offset": 0, 00:27:44.373 "data_size": 0 00:27:44.373 } 00:27:44.373 ] 00:27:44.373 }' 00:27:44.373 01:57:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:44.373 01:57:44 -- common/autotest_common.sh@10 -- # set +x 00:27:44.939 01:57:44 -- 
bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:45.196 [2024-04-24 01:57:45.224537] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:45.196 BaseBdev2 00:27:45.196 01:57:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:45.196 01:57:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:27:45.196 01:57:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:45.196 01:57:45 -- common/autotest_common.sh@887 -- # local i 00:27:45.196 01:57:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:45.196 01:57:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:45.196 01:57:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:45.455 01:57:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:45.713 [ 00:27:45.713 { 00:27:45.713 "name": "BaseBdev2", 00:27:45.713 "aliases": [ 00:27:45.713 "38e43fe6-4bc5-4512-991e-7e20d66fd016" 00:27:45.713 ], 00:27:45.713 "product_name": "Malloc disk", 00:27:45.713 "block_size": 512, 00:27:45.713 "num_blocks": 65536, 00:27:45.713 "uuid": "38e43fe6-4bc5-4512-991e-7e20d66fd016", 00:27:45.713 "assigned_rate_limits": { 00:27:45.713 "rw_ios_per_sec": 0, 00:27:45.713 "rw_mbytes_per_sec": 0, 00:27:45.713 "r_mbytes_per_sec": 0, 00:27:45.713 "w_mbytes_per_sec": 0 00:27:45.713 }, 00:27:45.713 "claimed": true, 00:27:45.713 "claim_type": "exclusive_write", 00:27:45.713 "zoned": false, 00:27:45.713 "supported_io_types": { 00:27:45.713 "read": true, 00:27:45.713 "write": true, 00:27:45.714 "unmap": true, 00:27:45.714 "write_zeroes": true, 00:27:45.714 "flush": true, 00:27:45.714 "reset": true, 00:27:45.714 "compare": false, 00:27:45.714 "compare_and_write": false, 00:27:45.714 "abort": true, 00:27:45.714 "nvme_admin": false, 00:27:45.714 "nvme_io": false 00:27:45.714 }, 00:27:45.714 "memory_domains": [ 00:27:45.714 { 00:27:45.714 "dma_device_id": "system", 00:27:45.714 "dma_device_type": 1 00:27:45.714 }, 00:27:45.714 { 00:27:45.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.714 "dma_device_type": 2 00:27:45.714 } 00:27:45.714 ], 00:27:45.714 "driver_specific": {} 00:27:45.714 } 00:27:45.714 ] 00:27:45.714 01:57:45 -- common/autotest_common.sh@893 -- # return 0 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.714 01:57:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.994 01:57:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:45.994 "name": "Existed_Raid", 00:27:45.994 "uuid": "738bc720-0a39-4df5-bbab-65c4d26f6e9d", 00:27:45.994 "strip_size_kb": 0, 00:27:45.994 "state": "configuring", 00:27:45.994 "raid_level": "raid1", 00:27:45.994 "superblock": true, 00:27:45.994 "num_base_bdevs": 3, 00:27:45.994 "num_base_bdevs_discovered": 2, 00:27:45.994 "num_base_bdevs_operational": 3, 00:27:45.994 "base_bdevs_list": [ 00:27:45.994 { 00:27:45.994 "name": "BaseBdev1", 00:27:45.994 "uuid": "8a619a9a-533d-4020-854b-1c2d7c384cfe", 00:27:45.994 "is_configured": true, 00:27:45.994 "data_offset": 2048, 00:27:45.994 "data_size": 63488 00:27:45.994 }, 00:27:45.994 { 00:27:45.994 "name": "BaseBdev2", 00:27:45.994 "uuid": "38e43fe6-4bc5-4512-991e-7e20d66fd016", 00:27:45.994 "is_configured": true, 00:27:45.994 "data_offset": 2048, 00:27:45.994 "data_size": 63488 00:27:45.994 }, 00:27:45.994 { 00:27:45.994 "name": "BaseBdev3", 00:27:45.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.994 "is_configured": false, 00:27:45.994 "data_offset": 0, 00:27:45.994 "data_size": 0 00:27:45.994 } 00:27:45.994 ] 00:27:45.994 }' 00:27:45.994 01:57:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:45.994 01:57:45 -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 01:57:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:46.819 [2024-04-24 01:57:46.647775] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:46.819 [2024-04-24 01:57:46.648032] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:46.819 [2024-04-24 01:57:46.648046] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:46.819 [2024-04-24 01:57:46.648224] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:27:46.819 [2024-04-24 01:57:46.648562] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:46.819 [2024-04-24 01:57:46.648573] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:27:46.819 [2024-04-24 01:57:46.648733] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.819 BaseBdev3 00:27:46.819 01:57:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:46.819 01:57:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:27:46.819 01:57:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:46.820 01:57:46 -- common/autotest_common.sh@887 -- # local i 00:27:46.820 01:57:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:46.820 01:57:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:46.820 01:57:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:47.078 01:57:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:47.078 [ 00:27:47.078 { 00:27:47.078 "name": "BaseBdev3", 00:27:47.078 "aliases": [ 00:27:47.078 "ee5cf284-9f32-45f0-82a1-abbb77e14da7" 00:27:47.078 ], 00:27:47.078 "product_name": "Malloc disk", 00:27:47.078 "block_size": 512, 
00:27:47.078 "num_blocks": 65536, 00:27:47.078 "uuid": "ee5cf284-9f32-45f0-82a1-abbb77e14da7", 00:27:47.078 "assigned_rate_limits": { 00:27:47.078 "rw_ios_per_sec": 0, 00:27:47.078 "rw_mbytes_per_sec": 0, 00:27:47.078 "r_mbytes_per_sec": 0, 00:27:47.078 "w_mbytes_per_sec": 0 00:27:47.078 }, 00:27:47.078 "claimed": true, 00:27:47.078 "claim_type": "exclusive_write", 00:27:47.078 "zoned": false, 00:27:47.078 "supported_io_types": { 00:27:47.078 "read": true, 00:27:47.078 "write": true, 00:27:47.078 "unmap": true, 00:27:47.078 "write_zeroes": true, 00:27:47.078 "flush": true, 00:27:47.078 "reset": true, 00:27:47.078 "compare": false, 00:27:47.078 "compare_and_write": false, 00:27:47.078 "abort": true, 00:27:47.078 "nvme_admin": false, 00:27:47.078 "nvme_io": false 00:27:47.078 }, 00:27:47.078 "memory_domains": [ 00:27:47.078 { 00:27:47.078 "dma_device_id": "system", 00:27:47.078 "dma_device_type": 1 00:27:47.078 }, 00:27:47.078 { 00:27:47.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.078 "dma_device_type": 2 00:27:47.078 } 00:27:47.078 ], 00:27:47.078 "driver_specific": {} 00:27:47.078 } 00:27:47.078 ] 00:27:47.078 01:57:47 -- common/autotest_common.sh@893 -- # return 0 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.078 01:57:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.337 01:57:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:47.337 "name": "Existed_Raid", 00:27:47.337 "uuid": "738bc720-0a39-4df5-bbab-65c4d26f6e9d", 00:27:47.337 "strip_size_kb": 0, 00:27:47.337 "state": "online", 00:27:47.337 "raid_level": "raid1", 00:27:47.337 "superblock": true, 00:27:47.337 "num_base_bdevs": 3, 00:27:47.337 "num_base_bdevs_discovered": 3, 00:27:47.337 "num_base_bdevs_operational": 3, 00:27:47.337 "base_bdevs_list": [ 00:27:47.337 { 00:27:47.337 "name": "BaseBdev1", 00:27:47.337 "uuid": "8a619a9a-533d-4020-854b-1c2d7c384cfe", 00:27:47.337 "is_configured": true, 00:27:47.337 "data_offset": 2048, 00:27:47.337 "data_size": 63488 00:27:47.337 }, 00:27:47.337 { 00:27:47.337 "name": "BaseBdev2", 00:27:47.337 "uuid": "38e43fe6-4bc5-4512-991e-7e20d66fd016", 00:27:47.337 "is_configured": true, 00:27:47.337 "data_offset": 2048, 00:27:47.337 "data_size": 63488 00:27:47.337 }, 00:27:47.337 { 00:27:47.337 "name": "BaseBdev3", 00:27:47.337 "uuid": "ee5cf284-9f32-45f0-82a1-abbb77e14da7", 00:27:47.337 "is_configured": true, 00:27:47.337 "data_offset": 2048, 00:27:47.337 "data_size": 63488 00:27:47.337 } 00:27:47.337 ] 00:27:47.337 }' 
00:27:47.337 01:57:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:47.337 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:27:47.905 01:57:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:48.163 [2024-04-24 01:57:48.053148] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:48.163 01:57:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.422 01:57:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:48.422 "name": "Existed_Raid", 00:27:48.422 "uuid": "738bc720-0a39-4df5-bbab-65c4d26f6e9d", 00:27:48.422 "strip_size_kb": 0, 00:27:48.422 "state": "online", 00:27:48.422 "raid_level": "raid1", 00:27:48.422 "superblock": true, 00:27:48.422 "num_base_bdevs": 3, 00:27:48.422 "num_base_bdevs_discovered": 2, 00:27:48.422 "num_base_bdevs_operational": 2, 00:27:48.422 "base_bdevs_list": [ 00:27:48.422 { 00:27:48.422 "name": null, 00:27:48.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.422 "is_configured": false, 00:27:48.422 "data_offset": 2048, 00:27:48.422 "data_size": 63488 00:27:48.422 }, 00:27:48.422 { 00:27:48.422 "name": "BaseBdev2", 00:27:48.422 "uuid": "38e43fe6-4bc5-4512-991e-7e20d66fd016", 00:27:48.422 "is_configured": true, 00:27:48.422 "data_offset": 2048, 00:27:48.422 "data_size": 63488 00:27:48.422 }, 00:27:48.422 { 00:27:48.422 "name": "BaseBdev3", 00:27:48.422 "uuid": "ee5cf284-9f32-45f0-82a1-abbb77e14da7", 00:27:48.422 "is_configured": true, 00:27:48.422 "data_offset": 2048, 00:27:48.422 "data_size": 63488 00:27:48.422 } 00:27:48.422 ] 00:27:48.422 }' 00:27:48.422 01:57:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:48.422 01:57:48 -- common/autotest_common.sh@10 -- # set +x 00:27:48.990 01:57:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:48.990 01:57:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:48.990 01:57:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.990 01:57:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:49.248 01:57:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:49.248 01:57:49 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:49.248 01:57:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:49.505 [2024-04-24 01:57:49.447066] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:49.505 01:57:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:49.505 01:57:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:49.505 01:57:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.505 01:57:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:50.073 01:57:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:50.073 01:57:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:50.073 01:57:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:50.073 [2024-04-24 01:57:50.099406] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:50.073 [2024-04-24 01:57:50.099501] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:50.331 [2024-04-24 01:57:50.202233] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.331 [2024-04-24 01:57:50.202360] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:50.331 [2024-04-24 01:57:50.202371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:50.331 01:57:50 -- bdev/bdev_raid.sh@287 -- # killprocess 125911 00:27:50.331 01:57:50 -- common/autotest_common.sh@936 -- # '[' -z 125911 ']' 00:27:50.331 01:57:50 -- common/autotest_common.sh@940 -- # kill -0 125911 00:27:50.331 01:57:50 -- common/autotest_common.sh@941 -- # uname 00:27:50.331 01:57:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:50.331 01:57:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125911 00:27:50.589 01:57:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:50.589 01:57:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:50.589 01:57:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125911' 00:27:50.589 killing process with pid 125911 00:27:50.589 01:57:50 -- common/autotest_common.sh@955 -- # kill 125911 00:27:50.589 [2024-04-24 01:57:50.433220] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:50.589 01:57:50 -- common/autotest_common.sh@960 -- # wait 125911 00:27:50.589 [2024-04-24 01:57:50.433364] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:51.965 ************************************ 00:27:51.965 END TEST raid_state_function_test_sb 00:27:51.965 ************************************ 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:51.965 00:27:51.965 real 0m13.164s 
00:27:51.965 user 0m22.289s 00:27:51.965 sys 0m1.866s 00:27:51.965 01:57:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:51.965 01:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:27:51.965 01:57:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:27:51.965 01:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:51.965 01:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:51.965 ************************************ 00:27:51.965 START TEST raid_superblock_test 00:27:51.965 ************************************ 00:27:51.965 01:57:51 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 3 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=126310 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:51.965 01:57:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126310 /var/tmp/spdk-raid.sock 00:27:51.965 01:57:51 -- common/autotest_common.sh@817 -- # '[' -z 126310 ']' 00:27:51.965 01:57:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:51.965 01:57:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:51.965 01:57:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:51.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:51.965 01:57:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:51.965 01:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:51.965 [2024-04-24 01:57:51.966105] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:27:51.965 [2024-04-24 01:57:51.966528] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126310 ] 00:27:52.224 [2024-04-24 01:57:52.145173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.482 [2024-04-24 01:57:52.413436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.742 [2024-04-24 01:57:52.686194] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:53.000 01:57:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:53.000 01:57:52 -- common/autotest_common.sh@850 -- # return 0 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:53.000 01:57:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:53.258 malloc1 00:27:53.258 01:57:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:53.516 [2024-04-24 01:57:53.355251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:53.516 [2024-04-24 01:57:53.355534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.516 [2024-04-24 01:57:53.355720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:53.516 [2024-04-24 01:57:53.355843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.516 [2024-04-24 01:57:53.358983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.516 [2024-04-24 01:57:53.359172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:53.516 pt1 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:53.516 01:57:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:53.775 malloc2 00:27:53.775 01:57:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:27:54.107 [2024-04-24 01:57:53.890478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:54.107 [2024-04-24 01:57:53.890756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.107 [2024-04-24 01:57:53.890834] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:54.107 [2024-04-24 01:57:53.890996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.107 [2024-04-24 01:57:53.893551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.107 [2024-04-24 01:57:53.893707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:54.107 pt2 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:54.107 01:57:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:54.107 malloc3 00:27:54.107 01:57:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:54.366 [2024-04-24 01:57:54.348433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:54.366 [2024-04-24 01:57:54.348704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.366 [2024-04-24 01:57:54.348787] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:54.366 [2024-04-24 01:57:54.348930] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.366 [2024-04-24 01:57:54.351431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.366 [2024-04-24 01:57:54.351612] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:54.366 pt3 00:27:54.366 01:57:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:54.366 01:57:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:54.366 01:57:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:27:54.626 [2024-04-24 01:57:54.584601] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:54.626 [2024-04-24 01:57:54.587255] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:54.626 [2024-04-24 01:57:54.587467] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:54.626 [2024-04-24 01:57:54.587794] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:27:54.626 [2024-04-24 01:57:54.587919] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:54.626 [2024-04-24 01:57:54.588126] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:54.626 [2024-04-24 01:57:54.588670] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:27:54.626 [2024-04-24 01:57:54.588804] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:27:54.626 [2024-04-24 01:57:54.589119] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.626 01:57:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.885 01:57:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:54.885 "name": "raid_bdev1", 00:27:54.885 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:27:54.885 "strip_size_kb": 0, 00:27:54.885 "state": "online", 00:27:54.885 "raid_level": "raid1", 00:27:54.885 "superblock": true, 00:27:54.885 "num_base_bdevs": 3, 00:27:54.885 "num_base_bdevs_discovered": 3, 00:27:54.885 "num_base_bdevs_operational": 3, 00:27:54.885 "base_bdevs_list": [ 00:27:54.885 { 00:27:54.885 "name": "pt1", 00:27:54.885 "uuid": "a6da1c4d-374e-5e97-a9dd-f11b64dee2c1", 00:27:54.885 "is_configured": true, 00:27:54.885 "data_offset": 2048, 00:27:54.885 "data_size": 63488 00:27:54.885 }, 00:27:54.885 { 00:27:54.885 "name": "pt2", 00:27:54.885 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:27:54.885 "is_configured": true, 00:27:54.885 "data_offset": 2048, 00:27:54.885 "data_size": 63488 00:27:54.885 }, 00:27:54.885 { 00:27:54.885 "name": "pt3", 00:27:54.885 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:27:54.885 "is_configured": true, 00:27:54.885 "data_offset": 2048, 00:27:54.885 "data_size": 63488 00:27:54.885 } 00:27:54.885 ] 00:27:54.885 }' 00:27:54.885 01:57:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:54.885 01:57:54 -- common/autotest_common.sh@10 -- # set +x 00:27:55.455 01:57:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:55.455 01:57:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:27:55.713 [2024-04-24 01:57:55.713510] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:55.713 01:57:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=eadb8286-408e-4c8e-8e7f-d4eafe66da1b 00:27:55.713 01:57:55 -- bdev/bdev_raid.sh@380 -- # '[' -z eadb8286-408e-4c8e-8e7f-d4eafe66da1b ']' 00:27:55.713 01:57:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:55.972 [2024-04-24 01:57:55.989307] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:55.972 [2024-04-24 01:57:55.989497] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:55.972 [2024-04-24 01:57:55.989747] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.972 [2024-04-24 01:57:55.989940] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:55.972 [2024-04-24 01:57:55.990032] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:27:55.972 01:57:56 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:27:55.972 01:57:56 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.230 01:57:56 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:27:56.230 01:57:56 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:27:56.230 01:57:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:56.230 01:57:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:56.489 01:57:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:56.489 01:57:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:56.747 01:57:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:56.747 01:57:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:57.005 01:57:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:57.005 01:57:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:57.265 01:57:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:27:57.265 01:57:57 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:57.265 01:57:57 -- common/autotest_common.sh@638 -- # local es=0 00:27:57.265 01:57:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:57.265 01:57:57 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.265 01:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:57.265 01:57:57 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.265 01:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:57.265 01:57:57 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.265 01:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:57.265 01:57:57 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.265 01:57:57 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:57.265 01:57:57 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:57.553 [2024-04-24 01:57:57.381552] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:57.553 [2024-04-24 01:57:57.383824] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:57.553 [2024-04-24 01:57:57.384013] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:57.553 [2024-04-24 01:57:57.384098] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:27:57.553 [2024-04-24 01:57:57.384355] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:27:57.553 [2024-04-24 01:57:57.384523] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:27:57.553 [2024-04-24 01:57:57.384596] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:57.553 [2024-04-24 01:57:57.384706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:27:57.553 request: 00:27:57.553 { 00:27:57.553 "name": "raid_bdev1", 00:27:57.553 "raid_level": "raid1", 00:27:57.553 "base_bdevs": [ 00:27:57.553 "malloc1", 00:27:57.553 "malloc2", 00:27:57.553 "malloc3" 00:27:57.553 ], 00:27:57.553 "superblock": false, 00:27:57.553 "method": "bdev_raid_create", 00:27:57.553 "req_id": 1 00:27:57.553 } 00:27:57.553 Got JSON-RPC error response 00:27:57.553 response: 00:27:57.553 { 00:27:57.553 "code": -17, 00:27:57.553 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:57.553 } 00:27:57.553 01:57:57 -- common/autotest_common.sh@641 -- # es=1 00:27:57.553 01:57:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:57.553 01:57:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:57.553 01:57:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:57.553 01:57:57 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.553 01:57:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:57.812 [2024-04-24 01:57:57.845600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:57.812 [2024-04-24 01:57:57.845827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.812 [2024-04-24 01:57:57.845936] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:57.812 [2024-04-24 01:57:57.846082] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.812 [2024-04-24 01:57:57.848674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.812 [2024-04-24 01:57:57.848842] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:57.812 [2024-04-24 01:57:57.849068] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:27:57.812 [2024-04-24 01:57:57.849219] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:57.812 pt1 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:57.812 
01:57:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.812 01:57:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.070 01:57:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:58.070 "name": "raid_bdev1", 00:27:58.070 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:27:58.070 "strip_size_kb": 0, 00:27:58.070 "state": "configuring", 00:27:58.070 "raid_level": "raid1", 00:27:58.070 "superblock": true, 00:27:58.070 "num_base_bdevs": 3, 00:27:58.070 "num_base_bdevs_discovered": 1, 00:27:58.070 "num_base_bdevs_operational": 3, 00:27:58.070 "base_bdevs_list": [ 00:27:58.070 { 00:27:58.070 "name": "pt1", 00:27:58.070 "uuid": "a6da1c4d-374e-5e97-a9dd-f11b64dee2c1", 00:27:58.070 "is_configured": true, 00:27:58.070 "data_offset": 2048, 00:27:58.070 "data_size": 63488 00:27:58.070 }, 00:27:58.070 { 00:27:58.070 "name": null, 00:27:58.070 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:27:58.070 "is_configured": false, 00:27:58.070 "data_offset": 2048, 00:27:58.070 "data_size": 63488 00:27:58.070 }, 00:27:58.070 { 00:27:58.070 "name": null, 00:27:58.070 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:27:58.070 "is_configured": false, 00:27:58.070 "data_offset": 2048, 00:27:58.070 "data_size": 63488 00:27:58.070 } 00:27:58.070 ] 00:27:58.070 }' 00:27:58.070 01:57:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:58.070 01:57:58 -- common/autotest_common.sh@10 -- # set +x 00:27:58.636 01:57:58 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:27:58.636 01:57:58 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:58.636 [2024-04-24 01:57:58.713768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:58.636 [2024-04-24 01:57:58.714059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.636 [2024-04-24 01:57:58.714144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:58.636 [2024-04-24 01:57:58.714241] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.636 [2024-04-24 01:57:58.714762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.636 [2024-04-24 01:57:58.714913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:58.636 [2024-04-24 01:57:58.715155] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:58.636 [2024-04-24 01:57:58.715280] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:58.636 pt2 00:27:58.894 01:57:58 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:59.153 [2024-04-24 01:57:58.997908] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:59.153 01:57:59 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:59.153 01:57:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:59.153 01:57:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:59.153 01:57:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:59.153 01:57:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:59.154 "name": "raid_bdev1", 00:27:59.154 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:27:59.154 "strip_size_kb": 0, 00:27:59.154 "state": "configuring", 00:27:59.154 "raid_level": "raid1", 00:27:59.154 "superblock": true, 00:27:59.154 "num_base_bdevs": 3, 00:27:59.154 "num_base_bdevs_discovered": 1, 00:27:59.154 "num_base_bdevs_operational": 3, 00:27:59.154 "base_bdevs_list": [ 00:27:59.154 { 00:27:59.154 "name": "pt1", 00:27:59.154 "uuid": "a6da1c4d-374e-5e97-a9dd-f11b64dee2c1", 00:27:59.154 "is_configured": true, 00:27:59.154 "data_offset": 2048, 00:27:59.154 "data_size": 63488 00:27:59.154 }, 00:27:59.154 { 00:27:59.154 "name": null, 00:27:59.154 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:27:59.154 "is_configured": false, 00:27:59.154 "data_offset": 2048, 00:27:59.154 "data_size": 63488 00:27:59.154 }, 00:27:59.154 { 00:27:59.154 "name": null, 00:27:59.154 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:27:59.154 "is_configured": false, 00:27:59.154 "data_offset": 2048, 00:27:59.154 "data_size": 63488 00:27:59.154 } 00:27:59.154 ] 00:27:59.154 }' 00:27:59.154 01:57:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:59.154 01:57:59 -- common/autotest_common.sh@10 -- # set +x 00:28:00.097 01:57:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:28:00.097 01:57:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:00.097 01:57:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:00.097 [2024-04-24 01:58:00.130165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:00.097 [2024-04-24 01:58:00.130498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.097 [2024-04-24 01:58:00.130588] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:00.097 [2024-04-24 01:58:00.130732] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.097 [2024-04-24 01:58:00.131344] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.097 [2024-04-24 01:58:00.131549] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:00.097 [2024-04-24 01:58:00.131840] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:00.097 [2024-04-24 01:58:00.131979] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:00.097 pt2 00:28:00.097 01:58:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:00.097 01:58:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:00.097 01:58:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:00.358 [2024-04-24 01:58:00.390165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:00.358 [2024-04-24 01:58:00.390419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.358 [2024-04-24 01:58:00.390494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:28:00.358 [2024-04-24 01:58:00.390601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.358 [2024-04-24 01:58:00.391124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.358 [2024-04-24 01:58:00.391276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:00.358 [2024-04-24 01:58:00.391509] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:00.358 [2024-04-24 01:58:00.391622] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:00.358 [2024-04-24 01:58:00.391793] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:28:00.358 [2024-04-24 01:58:00.391883] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:00.358 [2024-04-24 01:58:00.392066] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:00.358 [2024-04-24 01:58:00.392476] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:28:00.358 [2024-04-24 01:58:00.392588] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:28:00.358 [2024-04-24 01:58:00.392808] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.358 pt3 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:00.358 01:58:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.358 01:58:00 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.617 01:58:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:00.617 "name": "raid_bdev1", 00:28:00.617 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:00.617 "strip_size_kb": 0, 00:28:00.617 "state": "online", 00:28:00.617 "raid_level": "raid1", 00:28:00.617 "superblock": true, 00:28:00.617 "num_base_bdevs": 3, 00:28:00.617 "num_base_bdevs_discovered": 3, 00:28:00.617 "num_base_bdevs_operational": 3, 00:28:00.617 "base_bdevs_list": [ 00:28:00.617 { 00:28:00.617 "name": "pt1", 00:28:00.617 "uuid": "a6da1c4d-374e-5e97-a9dd-f11b64dee2c1", 00:28:00.617 "is_configured": true, 00:28:00.617 "data_offset": 2048, 00:28:00.617 "data_size": 63488 00:28:00.617 }, 00:28:00.617 { 00:28:00.617 "name": "pt2", 00:28:00.617 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:00.617 "is_configured": true, 00:28:00.617 "data_offset": 2048, 00:28:00.617 "data_size": 63488 00:28:00.617 }, 00:28:00.617 { 00:28:00.617 "name": "pt3", 00:28:00.617 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:00.617 "is_configured": true, 00:28:00.617 "data_offset": 2048, 00:28:00.617 "data_size": 63488 00:28:00.617 } 00:28:00.617 ] 00:28:00.617 }' 00:28:00.617 01:58:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:00.617 01:58:00 -- common/autotest_common.sh@10 -- # set +x 00:28:01.242 01:58:01 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:01.242 01:58:01 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:28:01.502 [2024-04-24 01:58:01.534786] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:01.502 01:58:01 -- bdev/bdev_raid.sh@430 -- # '[' eadb8286-408e-4c8e-8e7f-d4eafe66da1b '!=' eadb8286-408e-4c8e-8e7f-d4eafe66da1b ']' 00:28:01.502 01:58:01 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:28:01.502 01:58:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:01.502 01:58:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:28:01.502 01:58:01 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:01.760 [2024-04-24 01:58:01.782819] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.760 01:58:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.018 01:58:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:02.018 "name": "raid_bdev1", 00:28:02.018 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:02.018 "strip_size_kb": 0, 00:28:02.018 "state": "online", 
00:28:02.018 "raid_level": "raid1", 00:28:02.018 "superblock": true, 00:28:02.018 "num_base_bdevs": 3, 00:28:02.018 "num_base_bdevs_discovered": 2, 00:28:02.018 "num_base_bdevs_operational": 2, 00:28:02.018 "base_bdevs_list": [ 00:28:02.018 { 00:28:02.018 "name": null, 00:28:02.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.018 "is_configured": false, 00:28:02.018 "data_offset": 2048, 00:28:02.018 "data_size": 63488 00:28:02.018 }, 00:28:02.018 { 00:28:02.018 "name": "pt2", 00:28:02.018 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:02.018 "is_configured": true, 00:28:02.018 "data_offset": 2048, 00:28:02.018 "data_size": 63488 00:28:02.018 }, 00:28:02.018 { 00:28:02.018 "name": "pt3", 00:28:02.018 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:02.018 "is_configured": true, 00:28:02.018 "data_offset": 2048, 00:28:02.018 "data_size": 63488 00:28:02.018 } 00:28:02.018 ] 00:28:02.018 }' 00:28:02.018 01:58:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:02.018 01:58:02 -- common/autotest_common.sh@10 -- # set +x 00:28:02.584 01:58:02 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:02.842 [2024-04-24 01:58:02.830896] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:02.842 [2024-04-24 01:58:02.831161] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:02.842 [2024-04-24 01:58:02.831378] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:02.842 [2024-04-24 01:58:02.831563] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:02.842 [2024-04-24 01:58:02.831650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:28:02.842 01:58:02 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.842 01:58:02 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:28:03.099 01:58:03 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:28:03.099 01:58:03 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:28:03.099 01:58:03 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:28:03.099 01:58:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:03.099 01:58:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:03.358 01:58:03 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:03.358 01:58:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:03.358 01:58:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:03.616 [2024-04-24 01:58:03.658987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:03.616 [2024-04-24 01:58:03.659381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:03.616 [2024-04-24 
01:58:03.659547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:03.616 [2024-04-24 01:58:03.659670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:03.616 [2024-04-24 01:58:03.662744] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:03.616 [2024-04-24 01:58:03.662959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:03.616 [2024-04-24 01:58:03.663292] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:03.616 [2024-04-24 01:58:03.663465] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:03.616 pt2 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.616 01:58:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.875 01:58:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:03.875 "name": "raid_bdev1", 00:28:03.875 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:03.875 "strip_size_kb": 0, 00:28:03.875 "state": "configuring", 00:28:03.875 "raid_level": "raid1", 00:28:03.875 "superblock": true, 00:28:03.875 "num_base_bdevs": 3, 00:28:03.875 "num_base_bdevs_discovered": 1, 00:28:03.875 "num_base_bdevs_operational": 2, 00:28:03.875 "base_bdevs_list": [ 00:28:03.875 { 00:28:03.875 "name": null, 00:28:03.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.875 "is_configured": false, 00:28:03.875 "data_offset": 2048, 00:28:03.875 "data_size": 63488 00:28:03.875 }, 00:28:03.875 { 00:28:03.875 "name": "pt2", 00:28:03.875 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:03.875 "is_configured": true, 00:28:03.875 "data_offset": 2048, 00:28:03.875 "data_size": 63488 00:28:03.875 }, 00:28:03.875 { 00:28:03.875 "name": null, 00:28:03.875 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:03.875 "is_configured": false, 00:28:03.875 "data_offset": 2048, 00:28:03.875 "data_size": 63488 00:28:03.875 } 00:28:03.875 ] 00:28:03.875 }' 00:28:03.875 01:58:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:03.875 01:58:03 -- common/autotest_common.sh@10 -- # set +x 00:28:04.459 01:58:04 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:04.459 01:58:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:04.459 01:58:04 -- bdev/bdev_raid.sh@462 -- # i=2 00:28:04.459 01:58:04 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:04.718 [2024-04-24 01:58:04.691609] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:04.718 [2024-04-24 01:58:04.692040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.718 [2024-04-24 01:58:04.692181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:04.718 [2024-04-24 01:58:04.692464] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.718 [2024-04-24 01:58:04.693226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.718 [2024-04-24 01:58:04.693421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:04.718 [2024-04-24 01:58:04.693735] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:04.718 [2024-04-24 01:58:04.693910] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:04.718 [2024-04-24 01:58:04.694147] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:28:04.718 [2024-04-24 01:58:04.694260] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:04.718 [2024-04-24 01:58:04.694528] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:04.718 [2024-04-24 01:58:04.695087] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:28:04.718 [2024-04-24 01:58:04.695226] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:28:04.718 [2024-04-24 01:58:04.695565] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.718 pt3 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.718 01:58:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.975 01:58:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:04.975 "name": "raid_bdev1", 00:28:04.975 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:04.975 "strip_size_kb": 0, 00:28:04.975 "state": "online", 00:28:04.975 "raid_level": "raid1", 00:28:04.975 "superblock": true, 00:28:04.975 "num_base_bdevs": 3, 00:28:04.975 "num_base_bdevs_discovered": 2, 00:28:04.975 "num_base_bdevs_operational": 2, 00:28:04.975 "base_bdevs_list": [ 00:28:04.975 { 00:28:04.975 "name": null, 00:28:04.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.975 "is_configured": false, 00:28:04.975 "data_offset": 2048, 00:28:04.975 "data_size": 63488 00:28:04.975 }, 00:28:04.975 { 00:28:04.975 "name": "pt2", 00:28:04.975 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:04.975 
"is_configured": true, 00:28:04.975 "data_offset": 2048, 00:28:04.975 "data_size": 63488 00:28:04.975 }, 00:28:04.975 { 00:28:04.975 "name": "pt3", 00:28:04.975 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:04.975 "is_configured": true, 00:28:04.975 "data_offset": 2048, 00:28:04.975 "data_size": 63488 00:28:04.975 } 00:28:04.975 ] 00:28:04.975 }' 00:28:04.975 01:58:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:04.975 01:58:05 -- common/autotest_common.sh@10 -- # set +x 00:28:05.910 01:58:05 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:28:05.910 01:58:05 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:05.910 [2024-04-24 01:58:05.947937] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:05.910 [2024-04-24 01:58:05.948280] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:05.910 [2024-04-24 01:58:05.948514] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.910 [2024-04-24 01:58:05.948692] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.910 [2024-04-24 01:58:05.948792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:28:05.910 01:58:05 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.910 01:58:05 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:28:06.168 01:58:06 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:28:06.168 01:58:06 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:28:06.168 01:58:06 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:06.427 [2024-04-24 01:58:06.475970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:06.427 [2024-04-24 01:58:06.476360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.427 [2024-04-24 01:58:06.476453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:06.427 [2024-04-24 01:58:06.476714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.427 [2024-04-24 01:58:06.479568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.427 [2024-04-24 01:58:06.479750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:06.427 [2024-04-24 01:58:06.480001] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:06.427 [2024-04-24 01:58:06.480155] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:06.427 pt1 00:28:06.427 01:58:06 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:06.427 01:58:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.428 01:58:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.993 01:58:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:06.993 "name": "raid_bdev1", 00:28:06.993 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:06.993 "strip_size_kb": 0, 00:28:06.993 "state": "configuring", 00:28:06.993 "raid_level": "raid1", 00:28:06.993 "superblock": true, 00:28:06.993 "num_base_bdevs": 3, 00:28:06.993 "num_base_bdevs_discovered": 1, 00:28:06.993 "num_base_bdevs_operational": 3, 00:28:06.993 "base_bdevs_list": [ 00:28:06.993 { 00:28:06.993 "name": "pt1", 00:28:06.993 "uuid": "a6da1c4d-374e-5e97-a9dd-f11b64dee2c1", 00:28:06.993 "is_configured": true, 00:28:06.993 "data_offset": 2048, 00:28:06.993 "data_size": 63488 00:28:06.993 }, 00:28:06.993 { 00:28:06.993 "name": null, 00:28:06.993 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:06.993 "is_configured": false, 00:28:06.993 "data_offset": 2048, 00:28:06.993 "data_size": 63488 00:28:06.993 }, 00:28:06.993 { 00:28:06.993 "name": null, 00:28:06.993 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:06.993 "is_configured": false, 00:28:06.993 "data_offset": 2048, 00:28:06.993 "data_size": 63488 00:28:06.993 } 00:28:06.993 ] 00:28:06.993 }' 00:28:06.993 01:58:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:06.993 01:58:06 -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 01:58:07 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:28:07.589 01:58:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:07.589 01:58:07 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:07.589 01:58:07 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:07.589 01:58:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:07.589 01:58:07 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:07.862 01:58:07 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:07.862 01:58:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:07.862 01:58:07 -- bdev/bdev_raid.sh@489 -- # i=2 00:28:07.862 01:58:07 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:08.124 [2024-04-24 01:58:08.108475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:08.124 [2024-04-24 01:58:08.108816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.124 [2024-04-24 01:58:08.108903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:08.124 [2024-04-24 01:58:08.109114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.124 [2024-04-24 01:58:08.109788] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.124 [2024-04-24 01:58:08.109990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:08.124 [2024-04-24 01:58:08.110333] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:08.124 
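For reference, the passthru/raid manipulation traced above is driven entirely through the SPDK JSON-RPC client; a minimal by-hand sketch of the two calls in play at this point (assuming the bdev_svc app from this run is still listening on /var/tmp/spdk-raid.sock and that the malloc3 bdev still exists) would be roughly:

    # recreate the pt3 passthru bdev on top of malloc3 with the fixed UUID used in this test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

    # dump all raid bdevs and pick out raid_bdev1, the same pipeline verify_raid_bdev_state runs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The jq filter mirrors what verify_raid_bdev_state captures into raid_bdev_info before asserting on the state, raid_level and num_base_bdevs_* fields seen in the JSON dumps above and below.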
[2024-04-24 01:58:08.110447] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:08.124 [2024-04-24 01:58:08.110525] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:08.124 [2024-04-24 01:58:08.110578] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:28:08.124 [2024-04-24 01:58:08.110824] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:08.124 pt3 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:08.124 01:58:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:08.125 01:58:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:08.125 01:58:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:08.125 01:58:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.125 01:58:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.382 01:58:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:08.382 "name": "raid_bdev1", 00:28:08.382 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:08.382 "strip_size_kb": 0, 00:28:08.382 "state": "configuring", 00:28:08.382 "raid_level": "raid1", 00:28:08.382 "superblock": true, 00:28:08.382 "num_base_bdevs": 3, 00:28:08.382 "num_base_bdevs_discovered": 1, 00:28:08.382 "num_base_bdevs_operational": 2, 00:28:08.382 "base_bdevs_list": [ 00:28:08.382 { 00:28:08.382 "name": null, 00:28:08.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.382 "is_configured": false, 00:28:08.382 "data_offset": 2048, 00:28:08.382 "data_size": 63488 00:28:08.382 }, 00:28:08.382 { 00:28:08.382 "name": null, 00:28:08.382 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:08.382 "is_configured": false, 00:28:08.382 "data_offset": 2048, 00:28:08.382 "data_size": 63488 00:28:08.382 }, 00:28:08.382 { 00:28:08.382 "name": "pt3", 00:28:08.382 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:08.382 "is_configured": true, 00:28:08.382 "data_offset": 2048, 00:28:08.382 "data_size": 63488 00:28:08.382 } 00:28:08.382 ] 00:28:08.382 }' 00:28:08.382 01:58:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:08.382 01:58:08 -- common/autotest_common.sh@10 -- # set +x 00:28:08.947 01:58:09 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:28:09.206 01:58:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:09.206 01:58:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:09.465 [2024-04-24 01:58:09.308758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:09.465 [2024-04-24 01:58:09.309128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.465 [2024-04-24 01:58:09.309286] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:09.465 [2024-04-24 01:58:09.309402] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.465 [2024-04-24 01:58:09.310125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.465 [2024-04-24 01:58:09.310318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:09.465 [2024-04-24 01:58:09.310576] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:09.465 [2024-04-24 01:58:09.310706] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:09.465 [2024-04-24 01:58:09.310923] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:28:09.465 [2024-04-24 01:58:09.311026] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:09.465 [2024-04-24 01:58:09.311264] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:09.465 [2024-04-24 01:58:09.311786] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:28:09.465 [2024-04-24 01:58:09.311908] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:28:09.465 [2024-04-24 01:58:09.312187] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.465 pt2 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.465 01:58:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.723 01:58:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:09.723 "name": "raid_bdev1", 00:28:09.723 "uuid": "eadb8286-408e-4c8e-8e7f-d4eafe66da1b", 00:28:09.723 "strip_size_kb": 0, 00:28:09.723 "state": "online", 00:28:09.723 "raid_level": "raid1", 00:28:09.723 "superblock": true, 00:28:09.723 "num_base_bdevs": 3, 00:28:09.723 "num_base_bdevs_discovered": 2, 00:28:09.723 "num_base_bdevs_operational": 2, 00:28:09.723 "base_bdevs_list": [ 00:28:09.723 { 00:28:09.723 "name": null, 00:28:09.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.723 "is_configured": false, 00:28:09.723 "data_offset": 2048, 00:28:09.723 "data_size": 63488 00:28:09.723 }, 00:28:09.723 { 00:28:09.723 "name": "pt2", 00:28:09.723 "uuid": "8014cd94-2bff-5d8f-95e5-d2f0ea711695", 00:28:09.723 "is_configured": true, 00:28:09.723 "data_offset": 2048, 00:28:09.723 "data_size": 63488 00:28:09.723 
}, 00:28:09.723 { 00:28:09.723 "name": "pt3", 00:28:09.723 "uuid": "861365c8-3cb8-5a56-bf3e-b14b8b5fa65f", 00:28:09.723 "is_configured": true, 00:28:09.723 "data_offset": 2048, 00:28:09.723 "data_size": 63488 00:28:09.723 } 00:28:09.723 ] 00:28:09.723 }' 00:28:09.723 01:58:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:09.723 01:58:09 -- common/autotest_common.sh@10 -- # set +x 00:28:10.289 01:58:10 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:10.289 01:58:10 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:28:10.555 [2024-04-24 01:58:10.569243] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:10.555 01:58:10 -- bdev/bdev_raid.sh@506 -- # '[' eadb8286-408e-4c8e-8e7f-d4eafe66da1b '!=' eadb8286-408e-4c8e-8e7f-d4eafe66da1b ']' 00:28:10.555 01:58:10 -- bdev/bdev_raid.sh@511 -- # killprocess 126310 00:28:10.555 01:58:10 -- common/autotest_common.sh@936 -- # '[' -z 126310 ']' 00:28:10.555 01:58:10 -- common/autotest_common.sh@940 -- # kill -0 126310 00:28:10.555 01:58:10 -- common/autotest_common.sh@941 -- # uname 00:28:10.555 01:58:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:10.555 01:58:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126310 00:28:10.555 01:58:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:10.555 01:58:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:10.555 01:58:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126310' 00:28:10.555 killing process with pid 126310 00:28:10.555 01:58:10 -- common/autotest_common.sh@955 -- # kill 126310 00:28:10.555 [2024-04-24 01:58:10.620681] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:10.555 01:58:10 -- common/autotest_common.sh@960 -- # wait 126310 00:28:10.555 [2024-04-24 01:58:10.620917] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:10.555 [2024-04-24 01:58:10.621124] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:10.555 [2024-04-24 01:58:10.621251] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:28:11.123 [2024-04-24 01:58:10.946895] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:28:12.498 00:28:12.498 real 0m20.484s 00:28:12.498 user 0m36.236s 00:28:12.498 sys 0m3.064s 00:28:12.498 01:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:12.498 01:58:12 -- common/autotest_common.sh@10 -- # set +x 00:28:12.498 ************************************ 00:28:12.498 END TEST raid_superblock_test 00:28:12.498 ************************************ 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:28:12.498 01:58:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:28:12.498 01:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:12.498 01:58:12 -- common/autotest_common.sh@10 -- # set +x 00:28:12.498 ************************************ 00:28:12.498 START TEST raid_state_function_test 00:28:12.498 ************************************ 00:28:12.498 01:58:12 -- 
common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 false 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=126938 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126938' 00:28:12.498 Process raid pid: 126938 00:28:12.498 01:58:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126938 /var/tmp/spdk-raid.sock 00:28:12.498 01:58:12 -- common/autotest_common.sh@817 -- # '[' -z 126938 ']' 00:28:12.498 01:58:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:12.498 01:58:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:12.498 01:58:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:12.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:12.498 01:58:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:12.498 01:58:12 -- common/autotest_common.sh@10 -- # set +x 00:28:12.498 [2024-04-24 01:58:12.565386] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
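The raid_state_function_test that starts here drives the same RPC client; a condensed sketch of the calls it issues once the bdev_svc app below finishes starting (socket path, strip size and bdev names taken from this run) might be:

    # helper wrapping the RPC client used throughout this log
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # raid0 with 64 KiB strips over four base bdevs that do not exist yet: Existed_Raid stays "configuring"
    rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # add the malloc base bdevs one at a time (32 MB, 512-byte blocks, per the bdev_get_bdevs dumps below);
    # the raid only transitions to "online" once the fourth base bdev is claimed
    rpc bdev_malloc_create 32 512 -b BaseBdev1

    # re-check the raid state after each step, as verify_raid_bdev_state does
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
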
00:28:12.498 [2024-04-24 01:58:12.565660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.756 [2024-04-24 01:58:12.746830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.015 [2024-04-24 01:58:13.011487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.273 [2024-04-24 01:58:13.276552] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:13.543 01:58:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:13.543 01:58:13 -- common/autotest_common.sh@850 -- # return 0 00:28:13.543 01:58:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:13.802 [2024-04-24 01:58:13.835192] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:13.802 [2024-04-24 01:58:13.835318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:13.802 [2024-04-24 01:58:13.835340] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:13.802 [2024-04-24 01:58:13.835380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:13.802 [2024-04-24 01:58:13.835395] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:13.802 [2024-04-24 01:58:13.835454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:13.802 [2024-04-24 01:58:13.835469] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:13.802 [2024-04-24 01:58:13.835503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.802 01:58:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.373 01:58:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:14.373 "name": "Existed_Raid", 00:28:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.373 "strip_size_kb": 64, 00:28:14.373 "state": "configuring", 00:28:14.373 "raid_level": "raid0", 00:28:14.373 "superblock": false, 00:28:14.373 "num_base_bdevs": 4, 00:28:14.373 "num_base_bdevs_discovered": 0, 00:28:14.373 "num_base_bdevs_operational": 4, 00:28:14.373 "base_bdevs_list": [ 00:28:14.373 { 00:28:14.373 
"name": "BaseBdev1", 00:28:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.373 "is_configured": false, 00:28:14.373 "data_offset": 0, 00:28:14.373 "data_size": 0 00:28:14.373 }, 00:28:14.373 { 00:28:14.373 "name": "BaseBdev2", 00:28:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.373 "is_configured": false, 00:28:14.373 "data_offset": 0, 00:28:14.373 "data_size": 0 00:28:14.373 }, 00:28:14.373 { 00:28:14.373 "name": "BaseBdev3", 00:28:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.373 "is_configured": false, 00:28:14.373 "data_offset": 0, 00:28:14.373 "data_size": 0 00:28:14.373 }, 00:28:14.373 { 00:28:14.373 "name": "BaseBdev4", 00:28:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.373 "is_configured": false, 00:28:14.373 "data_offset": 0, 00:28:14.373 "data_size": 0 00:28:14.373 } 00:28:14.373 ] 00:28:14.373 }' 00:28:14.373 01:58:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:14.373 01:58:14 -- common/autotest_common.sh@10 -- # set +x 00:28:14.940 01:58:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:15.199 [2024-04-24 01:58:15.051197] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:15.199 [2024-04-24 01:58:15.051248] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:28:15.199 01:58:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:15.199 [2024-04-24 01:58:15.259237] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:15.199 [2024-04-24 01:58:15.259298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:15.199 [2024-04-24 01:58:15.259307] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:15.199 [2024-04-24 01:58:15.259330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:15.199 [2024-04-24 01:58:15.259338] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:15.199 [2024-04-24 01:58:15.259381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:15.199 [2024-04-24 01:58:15.259387] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:15.199 [2024-04-24 01:58:15.259409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:15.199 01:58:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:15.766 [2024-04-24 01:58:15.569401] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:15.766 BaseBdev1 00:28:15.766 01:58:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:28:15.766 01:58:15 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:28:15.766 01:58:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:15.766 01:58:15 -- common/autotest_common.sh@887 -- # local i 00:28:15.766 01:58:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:15.766 01:58:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:15.766 01:58:15 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:15.766 01:58:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:16.025 [ 00:28:16.025 { 00:28:16.025 "name": "BaseBdev1", 00:28:16.025 "aliases": [ 00:28:16.025 "1e76805d-48f1-4d76-b707-ff402a628c9c" 00:28:16.025 ], 00:28:16.025 "product_name": "Malloc disk", 00:28:16.025 "block_size": 512, 00:28:16.025 "num_blocks": 65536, 00:28:16.025 "uuid": "1e76805d-48f1-4d76-b707-ff402a628c9c", 00:28:16.025 "assigned_rate_limits": { 00:28:16.025 "rw_ios_per_sec": 0, 00:28:16.025 "rw_mbytes_per_sec": 0, 00:28:16.025 "r_mbytes_per_sec": 0, 00:28:16.025 "w_mbytes_per_sec": 0 00:28:16.025 }, 00:28:16.025 "claimed": true, 00:28:16.025 "claim_type": "exclusive_write", 00:28:16.025 "zoned": false, 00:28:16.025 "supported_io_types": { 00:28:16.025 "read": true, 00:28:16.025 "write": true, 00:28:16.025 "unmap": true, 00:28:16.025 "write_zeroes": true, 00:28:16.025 "flush": true, 00:28:16.025 "reset": true, 00:28:16.025 "compare": false, 00:28:16.025 "compare_and_write": false, 00:28:16.025 "abort": true, 00:28:16.025 "nvme_admin": false, 00:28:16.025 "nvme_io": false 00:28:16.025 }, 00:28:16.025 "memory_domains": [ 00:28:16.025 { 00:28:16.025 "dma_device_id": "system", 00:28:16.025 "dma_device_type": 1 00:28:16.025 }, 00:28:16.025 { 00:28:16.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.025 "dma_device_type": 2 00:28:16.025 } 00:28:16.025 ], 00:28:16.025 "driver_specific": {} 00:28:16.025 } 00:28:16.025 ] 00:28:16.025 01:58:16 -- common/autotest_common.sh@893 -- # return 0 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.025 01:58:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.284 01:58:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:16.284 "name": "Existed_Raid", 00:28:16.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.284 "strip_size_kb": 64, 00:28:16.284 "state": "configuring", 00:28:16.284 "raid_level": "raid0", 00:28:16.284 "superblock": false, 00:28:16.284 "num_base_bdevs": 4, 00:28:16.284 "num_base_bdevs_discovered": 1, 00:28:16.284 "num_base_bdevs_operational": 4, 00:28:16.284 "base_bdevs_list": [ 00:28:16.284 { 00:28:16.284 "name": "BaseBdev1", 00:28:16.284 "uuid": "1e76805d-48f1-4d76-b707-ff402a628c9c", 00:28:16.284 "is_configured": true, 00:28:16.284 "data_offset": 0, 00:28:16.284 "data_size": 65536 00:28:16.284 }, 00:28:16.284 { 00:28:16.284 "name": "BaseBdev2", 00:28:16.284 "uuid": "00000000-0000-0000-0000-000000000000", 
00:28:16.284 "is_configured": false, 00:28:16.284 "data_offset": 0, 00:28:16.284 "data_size": 0 00:28:16.284 }, 00:28:16.285 { 00:28:16.285 "name": "BaseBdev3", 00:28:16.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.285 "is_configured": false, 00:28:16.285 "data_offset": 0, 00:28:16.285 "data_size": 0 00:28:16.285 }, 00:28:16.285 { 00:28:16.285 "name": "BaseBdev4", 00:28:16.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.285 "is_configured": false, 00:28:16.285 "data_offset": 0, 00:28:16.285 "data_size": 0 00:28:16.285 } 00:28:16.285 ] 00:28:16.285 }' 00:28:16.285 01:58:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:16.285 01:58:16 -- common/autotest_common.sh@10 -- # set +x 00:28:16.852 01:58:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:17.111 [2024-04-24 01:58:16.989707] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:17.111 [2024-04-24 01:58:16.989766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:28:17.111 01:58:17 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:28:17.111 01:58:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:17.369 [2024-04-24 01:58:17.241836] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:17.369 [2024-04-24 01:58:17.244084] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:17.369 [2024-04-24 01:58:17.244180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:17.369 [2024-04-24 01:58:17.244192] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:17.369 [2024-04-24 01:58:17.244219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:17.369 [2024-04-24 01:58:17.244228] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:17.369 [2024-04-24 01:58:17.244247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:17.369 01:58:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:17.370 01:58:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.370 01:58:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.628 
01:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:17.628 "name": "Existed_Raid", 00:28:17.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.628 "strip_size_kb": 64, 00:28:17.628 "state": "configuring", 00:28:17.628 "raid_level": "raid0", 00:28:17.628 "superblock": false, 00:28:17.628 "num_base_bdevs": 4, 00:28:17.628 "num_base_bdevs_discovered": 1, 00:28:17.628 "num_base_bdevs_operational": 4, 00:28:17.628 "base_bdevs_list": [ 00:28:17.628 { 00:28:17.628 "name": "BaseBdev1", 00:28:17.628 "uuid": "1e76805d-48f1-4d76-b707-ff402a628c9c", 00:28:17.628 "is_configured": true, 00:28:17.628 "data_offset": 0, 00:28:17.628 "data_size": 65536 00:28:17.628 }, 00:28:17.628 { 00:28:17.628 "name": "BaseBdev2", 00:28:17.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.628 "is_configured": false, 00:28:17.628 "data_offset": 0, 00:28:17.628 "data_size": 0 00:28:17.628 }, 00:28:17.628 { 00:28:17.628 "name": "BaseBdev3", 00:28:17.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.628 "is_configured": false, 00:28:17.628 "data_offset": 0, 00:28:17.628 "data_size": 0 00:28:17.628 }, 00:28:17.628 { 00:28:17.628 "name": "BaseBdev4", 00:28:17.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.628 "is_configured": false, 00:28:17.628 "data_offset": 0, 00:28:17.628 "data_size": 0 00:28:17.628 } 00:28:17.628 ] 00:28:17.628 }' 00:28:17.628 01:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:17.628 01:58:17 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 01:58:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:18.463 [2024-04-24 01:58:18.407343] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:18.463 BaseBdev2 00:28:18.463 01:58:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:28:18.463 01:58:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:28:18.463 01:58:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:18.463 01:58:18 -- common/autotest_common.sh@887 -- # local i 00:28:18.463 01:58:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:18.463 01:58:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:18.463 01:58:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:18.721 01:58:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:18.978 [ 00:28:18.978 { 00:28:18.978 "name": "BaseBdev2", 00:28:18.978 "aliases": [ 00:28:18.978 "ba0a7ef6-cf40-4335-8214-60a2bcca2d9f" 00:28:18.978 ], 00:28:18.978 "product_name": "Malloc disk", 00:28:18.978 "block_size": 512, 00:28:18.978 "num_blocks": 65536, 00:28:18.978 "uuid": "ba0a7ef6-cf40-4335-8214-60a2bcca2d9f", 00:28:18.978 "assigned_rate_limits": { 00:28:18.978 "rw_ios_per_sec": 0, 00:28:18.978 "rw_mbytes_per_sec": 0, 00:28:18.978 "r_mbytes_per_sec": 0, 00:28:18.978 "w_mbytes_per_sec": 0 00:28:18.978 }, 00:28:18.978 "claimed": true, 00:28:18.978 "claim_type": "exclusive_write", 00:28:18.978 "zoned": false, 00:28:18.978 "supported_io_types": { 00:28:18.978 "read": true, 00:28:18.978 "write": true, 00:28:18.978 "unmap": true, 00:28:18.978 "write_zeroes": true, 00:28:18.978 "flush": true, 00:28:18.978 "reset": true, 00:28:18.978 "compare": false, 00:28:18.978 "compare_and_write": false, 00:28:18.978 "abort": true, 00:28:18.978 
"nvme_admin": false, 00:28:18.978 "nvme_io": false 00:28:18.978 }, 00:28:18.978 "memory_domains": [ 00:28:18.978 { 00:28:18.978 "dma_device_id": "system", 00:28:18.978 "dma_device_type": 1 00:28:18.978 }, 00:28:18.978 { 00:28:18.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.978 "dma_device_type": 2 00:28:18.978 } 00:28:18.978 ], 00:28:18.978 "driver_specific": {} 00:28:18.978 } 00:28:18.978 ] 00:28:18.978 01:58:19 -- common/autotest_common.sh@893 -- # return 0 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.978 01:58:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.236 01:58:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:19.236 "name": "Existed_Raid", 00:28:19.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.236 "strip_size_kb": 64, 00:28:19.236 "state": "configuring", 00:28:19.236 "raid_level": "raid0", 00:28:19.236 "superblock": false, 00:28:19.236 "num_base_bdevs": 4, 00:28:19.236 "num_base_bdevs_discovered": 2, 00:28:19.236 "num_base_bdevs_operational": 4, 00:28:19.236 "base_bdevs_list": [ 00:28:19.236 { 00:28:19.236 "name": "BaseBdev1", 00:28:19.236 "uuid": "1e76805d-48f1-4d76-b707-ff402a628c9c", 00:28:19.236 "is_configured": true, 00:28:19.236 "data_offset": 0, 00:28:19.236 "data_size": 65536 00:28:19.236 }, 00:28:19.236 { 00:28:19.236 "name": "BaseBdev2", 00:28:19.236 "uuid": "ba0a7ef6-cf40-4335-8214-60a2bcca2d9f", 00:28:19.236 "is_configured": true, 00:28:19.236 "data_offset": 0, 00:28:19.236 "data_size": 65536 00:28:19.236 }, 00:28:19.236 { 00:28:19.236 "name": "BaseBdev3", 00:28:19.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.236 "is_configured": false, 00:28:19.236 "data_offset": 0, 00:28:19.236 "data_size": 0 00:28:19.236 }, 00:28:19.236 { 00:28:19.236 "name": "BaseBdev4", 00:28:19.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.236 "is_configured": false, 00:28:19.236 "data_offset": 0, 00:28:19.236 "data_size": 0 00:28:19.236 } 00:28:19.236 ] 00:28:19.236 }' 00:28:19.236 01:58:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:19.236 01:58:19 -- common/autotest_common.sh@10 -- # set +x 00:28:20.167 01:58:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:20.167 [2024-04-24 01:58:20.211448] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:20.167 BaseBdev3 00:28:20.167 01:58:20 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev3 00:28:20.167 01:58:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:28:20.167 01:58:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:20.167 01:58:20 -- common/autotest_common.sh@887 -- # local i 00:28:20.167 01:58:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:20.167 01:58:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:20.167 01:58:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:20.425 01:58:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:20.741 [ 00:28:20.741 { 00:28:20.741 "name": "BaseBdev3", 00:28:20.741 "aliases": [ 00:28:20.741 "4069397e-2e0e-477b-ba7c-75bb232e6504" 00:28:20.741 ], 00:28:20.741 "product_name": "Malloc disk", 00:28:20.741 "block_size": 512, 00:28:20.741 "num_blocks": 65536, 00:28:20.741 "uuid": "4069397e-2e0e-477b-ba7c-75bb232e6504", 00:28:20.741 "assigned_rate_limits": { 00:28:20.741 "rw_ios_per_sec": 0, 00:28:20.741 "rw_mbytes_per_sec": 0, 00:28:20.741 "r_mbytes_per_sec": 0, 00:28:20.741 "w_mbytes_per_sec": 0 00:28:20.741 }, 00:28:20.741 "claimed": true, 00:28:20.741 "claim_type": "exclusive_write", 00:28:20.741 "zoned": false, 00:28:20.741 "supported_io_types": { 00:28:20.741 "read": true, 00:28:20.741 "write": true, 00:28:20.741 "unmap": true, 00:28:20.741 "write_zeroes": true, 00:28:20.741 "flush": true, 00:28:20.741 "reset": true, 00:28:20.741 "compare": false, 00:28:20.741 "compare_and_write": false, 00:28:20.741 "abort": true, 00:28:20.741 "nvme_admin": false, 00:28:20.741 "nvme_io": false 00:28:20.741 }, 00:28:20.741 "memory_domains": [ 00:28:20.741 { 00:28:20.741 "dma_device_id": "system", 00:28:20.741 "dma_device_type": 1 00:28:20.741 }, 00:28:20.741 { 00:28:20.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.741 "dma_device_type": 2 00:28:20.741 } 00:28:20.741 ], 00:28:20.741 "driver_specific": {} 00:28:20.741 } 00:28:20.741 ] 00:28:20.741 01:58:20 -- common/autotest_common.sh@893 -- # return 0 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.741 01:58:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.003 01:58:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:21.003 "name": "Existed_Raid", 00:28:21.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.003 "strip_size_kb": 64, 
00:28:21.003 "state": "configuring", 00:28:21.003 "raid_level": "raid0", 00:28:21.003 "superblock": false, 00:28:21.003 "num_base_bdevs": 4, 00:28:21.003 "num_base_bdevs_discovered": 3, 00:28:21.003 "num_base_bdevs_operational": 4, 00:28:21.003 "base_bdevs_list": [ 00:28:21.003 { 00:28:21.003 "name": "BaseBdev1", 00:28:21.003 "uuid": "1e76805d-48f1-4d76-b707-ff402a628c9c", 00:28:21.003 "is_configured": true, 00:28:21.003 "data_offset": 0, 00:28:21.003 "data_size": 65536 00:28:21.003 }, 00:28:21.003 { 00:28:21.004 "name": "BaseBdev2", 00:28:21.004 "uuid": "ba0a7ef6-cf40-4335-8214-60a2bcca2d9f", 00:28:21.004 "is_configured": true, 00:28:21.004 "data_offset": 0, 00:28:21.004 "data_size": 65536 00:28:21.004 }, 00:28:21.004 { 00:28:21.004 "name": "BaseBdev3", 00:28:21.004 "uuid": "4069397e-2e0e-477b-ba7c-75bb232e6504", 00:28:21.004 "is_configured": true, 00:28:21.004 "data_offset": 0, 00:28:21.004 "data_size": 65536 00:28:21.004 }, 00:28:21.004 { 00:28:21.004 "name": "BaseBdev4", 00:28:21.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.004 "is_configured": false, 00:28:21.004 "data_offset": 0, 00:28:21.004 "data_size": 0 00:28:21.004 } 00:28:21.004 ] 00:28:21.004 }' 00:28:21.004 01:58:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:21.004 01:58:20 -- common/autotest_common.sh@10 -- # set +x 00:28:21.570 01:58:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:21.829 [2024-04-24 01:58:21.725964] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:21.829 [2024-04-24 01:58:21.726042] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:28:21.829 [2024-04-24 01:58:21.726057] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:28:21.829 [2024-04-24 01:58:21.726201] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:28:21.829 [2024-04-24 01:58:21.726591] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:28:21.829 [2024-04-24 01:58:21.726615] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:28:21.829 [2024-04-24 01:58:21.726877] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.829 BaseBdev4 00:28:21.829 01:58:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:28:21.829 01:58:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:28:21.829 01:58:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:21.829 01:58:21 -- common/autotest_common.sh@887 -- # local i 00:28:21.829 01:58:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:21.829 01:58:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:21.829 01:58:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:22.087 01:58:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:22.346 [ 00:28:22.346 { 00:28:22.346 "name": "BaseBdev4", 00:28:22.346 "aliases": [ 00:28:22.346 "5328e477-adf9-4cb7-b0ac-4462db94dd25" 00:28:22.346 ], 00:28:22.346 "product_name": "Malloc disk", 00:28:22.346 "block_size": 512, 00:28:22.346 "num_blocks": 65536, 00:28:22.346 "uuid": "5328e477-adf9-4cb7-b0ac-4462db94dd25", 00:28:22.346 
"assigned_rate_limits": { 00:28:22.346 "rw_ios_per_sec": 0, 00:28:22.346 "rw_mbytes_per_sec": 0, 00:28:22.346 "r_mbytes_per_sec": 0, 00:28:22.346 "w_mbytes_per_sec": 0 00:28:22.346 }, 00:28:22.346 "claimed": true, 00:28:22.346 "claim_type": "exclusive_write", 00:28:22.346 "zoned": false, 00:28:22.346 "supported_io_types": { 00:28:22.346 "read": true, 00:28:22.346 "write": true, 00:28:22.346 "unmap": true, 00:28:22.346 "write_zeroes": true, 00:28:22.346 "flush": true, 00:28:22.346 "reset": true, 00:28:22.346 "compare": false, 00:28:22.346 "compare_and_write": false, 00:28:22.346 "abort": true, 00:28:22.346 "nvme_admin": false, 00:28:22.346 "nvme_io": false 00:28:22.346 }, 00:28:22.346 "memory_domains": [ 00:28:22.346 { 00:28:22.346 "dma_device_id": "system", 00:28:22.346 "dma_device_type": 1 00:28:22.346 }, 00:28:22.346 { 00:28:22.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.346 "dma_device_type": 2 00:28:22.346 } 00:28:22.346 ], 00:28:22.346 "driver_specific": {} 00:28:22.346 } 00:28:22.346 ] 00:28:22.346 01:58:22 -- common/autotest_common.sh@893 -- # return 0 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.346 01:58:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:22.605 01:58:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:22.605 "name": "Existed_Raid", 00:28:22.605 "uuid": "87f19b47-cc1c-42a7-9f32-2e708a82a847", 00:28:22.605 "strip_size_kb": 64, 00:28:22.605 "state": "online", 00:28:22.605 "raid_level": "raid0", 00:28:22.605 "superblock": false, 00:28:22.605 "num_base_bdevs": 4, 00:28:22.605 "num_base_bdevs_discovered": 4, 00:28:22.605 "num_base_bdevs_operational": 4, 00:28:22.605 "base_bdevs_list": [ 00:28:22.605 { 00:28:22.605 "name": "BaseBdev1", 00:28:22.605 "uuid": "1e76805d-48f1-4d76-b707-ff402a628c9c", 00:28:22.605 "is_configured": true, 00:28:22.605 "data_offset": 0, 00:28:22.605 "data_size": 65536 00:28:22.605 }, 00:28:22.605 { 00:28:22.605 "name": "BaseBdev2", 00:28:22.605 "uuid": "ba0a7ef6-cf40-4335-8214-60a2bcca2d9f", 00:28:22.605 "is_configured": true, 00:28:22.605 "data_offset": 0, 00:28:22.605 "data_size": 65536 00:28:22.605 }, 00:28:22.605 { 00:28:22.605 "name": "BaseBdev3", 00:28:22.605 "uuid": "4069397e-2e0e-477b-ba7c-75bb232e6504", 00:28:22.605 "is_configured": true, 00:28:22.605 "data_offset": 0, 00:28:22.605 "data_size": 65536 00:28:22.605 }, 00:28:22.605 { 00:28:22.605 "name": "BaseBdev4", 00:28:22.605 "uuid": "5328e477-adf9-4cb7-b0ac-4462db94dd25", 00:28:22.605 "is_configured": true, 
00:28:22.605 "data_offset": 0, 00:28:22.605 "data_size": 65536 00:28:22.605 } 00:28:22.605 ] 00:28:22.605 }' 00:28:22.605 01:58:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:22.605 01:58:22 -- common/autotest_common.sh@10 -- # set +x 00:28:23.218 01:58:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:23.478 [2024-04-24 01:58:23.454458] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:23.478 [2024-04-24 01:58:23.454495] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:23.478 [2024-04-24 01:58:23.454548] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.735 01:58:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:28:23.735 01:58:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:28:23.735 01:58:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.736 01:58:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:23.993 01:58:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:23.993 "name": "Existed_Raid", 00:28:23.993 "uuid": "87f19b47-cc1c-42a7-9f32-2e708a82a847", 00:28:23.993 "strip_size_kb": 64, 00:28:23.993 "state": "offline", 00:28:23.993 "raid_level": "raid0", 00:28:23.993 "superblock": false, 00:28:23.993 "num_base_bdevs": 4, 00:28:23.993 "num_base_bdevs_discovered": 3, 00:28:23.993 "num_base_bdevs_operational": 3, 00:28:23.994 "base_bdevs_list": [ 00:28:23.994 { 00:28:23.994 "name": null, 00:28:23.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.994 "is_configured": false, 00:28:23.994 "data_offset": 0, 00:28:23.994 "data_size": 65536 00:28:23.994 }, 00:28:23.994 { 00:28:23.994 "name": "BaseBdev2", 00:28:23.994 "uuid": "ba0a7ef6-cf40-4335-8214-60a2bcca2d9f", 00:28:23.994 "is_configured": true, 00:28:23.994 "data_offset": 0, 00:28:23.994 "data_size": 65536 00:28:23.994 }, 00:28:23.994 { 00:28:23.994 "name": "BaseBdev3", 00:28:23.994 "uuid": "4069397e-2e0e-477b-ba7c-75bb232e6504", 00:28:23.994 "is_configured": true, 00:28:23.994 "data_offset": 0, 00:28:23.994 "data_size": 65536 00:28:23.994 }, 00:28:23.994 { 00:28:23.994 "name": "BaseBdev4", 00:28:23.994 "uuid": "5328e477-adf9-4cb7-b0ac-4462db94dd25", 00:28:23.994 "is_configured": true, 00:28:23.994 "data_offset": 0, 00:28:23.994 "data_size": 65536 00:28:23.994 } 00:28:23.994 ] 00:28:23.994 }' 00:28:23.994 01:58:23 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:23.994 01:58:23 -- common/autotest_common.sh@10 -- # set +x 00:28:24.561 01:58:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:28:24.561 01:58:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:24.561 01:58:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.561 01:58:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:24.819 01:58:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:24.819 01:58:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:24.819 01:58:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:25.077 [2024-04-24 01:58:24.918588] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:25.077 01:58:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:25.077 01:58:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:25.077 01:58:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.077 01:58:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:25.336 01:58:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:25.336 01:58:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:25.336 01:58:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:25.594 [2024-04-24 01:58:25.596436] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:25.852 01:58:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:26.110 [2024-04-24 01:58:26.192620] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:26.110 [2024-04-24 01:58:26.192680] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:28:26.368 01:58:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:26.368 01:58:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:26.368 01:58:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.368 01:58:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:28:26.625 01:58:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:28:26.625 01:58:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:28:26.626 01:58:26 -- bdev/bdev_raid.sh@287 -- # killprocess 126938 00:28:26.626 01:58:26 -- common/autotest_common.sh@936 -- # '[' -z 126938 ']' 00:28:26.626 01:58:26 -- common/autotest_common.sh@940 -- # kill -0 126938 00:28:26.626 01:58:26 -- common/autotest_common.sh@941 -- # uname 00:28:26.626 01:58:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:26.626 01:58:26 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 126938 00:28:26.626 killing process with pid 126938 00:28:26.626 01:58:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:26.626 01:58:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:26.626 01:58:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126938' 00:28:26.626 01:58:26 -- common/autotest_common.sh@955 -- # kill 126938 00:28:26.626 01:58:26 -- common/autotest_common.sh@960 -- # wait 126938 00:28:26.626 [2024-04-24 01:58:26.599727] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:26.626 [2024-04-24 01:58:26.599870] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:28.000 01:58:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:28:28.000 00:28:28.000 real 0m15.541s 00:28:28.000 user 0m26.858s 00:28:28.000 sys 0m2.123s 00:28:28.000 01:58:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:28.000 01:58:28 -- common/autotest_common.sh@10 -- # set +x 00:28:28.000 ************************************ 00:28:28.000 END TEST raid_state_function_test 00:28:28.000 ************************************ 00:28:28.000 01:58:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:28:28.000 01:58:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:28:28.000 01:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:28.000 01:58:28 -- common/autotest_common.sh@10 -- # set +x 00:28:28.258 ************************************ 00:28:28.258 START TEST raid_state_function_test_sb 00:28:28.258 ************************************ 00:28:28.258 01:58:28 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 true 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:28:28.258 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@212 -- # '[' 
raid0 '!=' raid1 ']' 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=127392 00:28:28.259 Process raid pid: 127392 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127392' 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:28.259 01:58:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127392 /var/tmp/spdk-raid.sock 00:28:28.259 01:58:28 -- common/autotest_common.sh@817 -- # '[' -z 127392 ']' 00:28:28.259 01:58:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:28.259 01:58:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:28.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:28.259 01:58:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:28.259 01:58:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:28.259 01:58:28 -- common/autotest_common.sh@10 -- # set +x 00:28:28.259 [2024-04-24 01:58:28.215826] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:28:28.259 [2024-04-24 01:58:28.216036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.517 [2024-04-24 01:58:28.394470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.776 [2024-04-24 01:58:28.645894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.034 [2024-04-24 01:58:28.912746] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:29.293 01:58:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:29.293 01:58:29 -- common/autotest_common.sh@850 -- # return 0 00:28:29.293 01:58:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:29.551 [2024-04-24 01:58:29.418411] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:29.551 [2024-04-24 01:58:29.418489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:29.551 [2024-04-24 01:58:29.418501] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:29.551 [2024-04-24 01:58:29.418524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:29.551 [2024-04-24 01:58:29.418532] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:29.551 [2024-04-24 01:58:29.418571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:29.552 [2024-04-24 01:58:29.418579] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:29.552 [2024-04-24 01:58:29.418603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
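The traced commands above have just created the raid0 bdev while none of its four base bdevs exist, so Existed_Raid is left in the "configuring" state with num_base_bdevs_discovered at 0; the verify_raid_bdev_state helper that follows asserts exactly that by dumping the raid bdev over RPC and filtering it with jq. A minimal condensed sketch of that create-then-verify step, assuming the same bdev_svc instance from this run is still listening on /var/tmp/spdk-raid.sock and the checkout sits at /home/vagrant/spdk_repo/spdk as in this log:

    # Create the raid0 bdev first; with no base bdevs registered it stays "configuring".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Dump it and pick out the entry the test asserts on; "state" should read "configuring"
    # and "num_base_bdevs_discovered" should be 0 until the malloc base bdevs are created.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'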
00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.552 01:58:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:29.811 01:58:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:29.811 "name": "Existed_Raid", 00:28:29.811 "uuid": "571d49d6-1ba8-4fdf-93d5-bd9401bfc827", 00:28:29.811 "strip_size_kb": 64, 00:28:29.811 "state": "configuring", 00:28:29.811 "raid_level": "raid0", 00:28:29.811 "superblock": true, 00:28:29.811 "num_base_bdevs": 4, 00:28:29.811 "num_base_bdevs_discovered": 0, 00:28:29.811 "num_base_bdevs_operational": 4, 00:28:29.811 "base_bdevs_list": [ 00:28:29.811 { 00:28:29.811 "name": "BaseBdev1", 00:28:29.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.811 "is_configured": false, 00:28:29.811 "data_offset": 0, 00:28:29.811 "data_size": 0 00:28:29.811 }, 00:28:29.811 { 00:28:29.811 "name": "BaseBdev2", 00:28:29.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.811 "is_configured": false, 00:28:29.811 "data_offset": 0, 00:28:29.811 "data_size": 0 00:28:29.811 }, 00:28:29.811 { 00:28:29.811 "name": "BaseBdev3", 00:28:29.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.811 "is_configured": false, 00:28:29.811 "data_offset": 0, 00:28:29.811 "data_size": 0 00:28:29.811 }, 00:28:29.811 { 00:28:29.811 "name": "BaseBdev4", 00:28:29.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.811 "is_configured": false, 00:28:29.811 "data_offset": 0, 00:28:29.811 "data_size": 0 00:28:29.811 } 00:28:29.811 ] 00:28:29.811 }' 00:28:29.811 01:58:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:29.811 01:58:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.379 01:58:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:30.637 [2024-04-24 01:58:30.547878] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:30.637 [2024-04-24 01:58:30.548201] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:28:30.637 01:58:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:30.981 [2024-04-24 01:58:30.827977] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:30.981 [2024-04-24 01:58:30.828295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:30.981 [2024-04-24 01:58:30.828396] bdev.c:8073:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:30.981 [2024-04-24 01:58:30.828462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:30.981 [2024-04-24 01:58:30.828559] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:30.981 [2024-04-24 01:58:30.828647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:30.981 [2024-04-24 01:58:30.828731] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:30.982 [2024-04-24 01:58:30.828792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:30.982 01:58:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:31.240 [2024-04-24 01:58:31.067501] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:31.240 BaseBdev1 00:28:31.240 01:58:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:28:31.240 01:58:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:28:31.240 01:58:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:31.240 01:58:31 -- common/autotest_common.sh@887 -- # local i 00:28:31.240 01:58:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:31.240 01:58:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:31.240 01:58:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:31.240 01:58:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:31.497 [ 00:28:31.497 { 00:28:31.497 "name": "BaseBdev1", 00:28:31.497 "aliases": [ 00:28:31.497 "9f964115-6cdc-4747-8450-29340778c5ea" 00:28:31.497 ], 00:28:31.497 "product_name": "Malloc disk", 00:28:31.497 "block_size": 512, 00:28:31.497 "num_blocks": 65536, 00:28:31.497 "uuid": "9f964115-6cdc-4747-8450-29340778c5ea", 00:28:31.497 "assigned_rate_limits": { 00:28:31.497 "rw_ios_per_sec": 0, 00:28:31.497 "rw_mbytes_per_sec": 0, 00:28:31.497 "r_mbytes_per_sec": 0, 00:28:31.497 "w_mbytes_per_sec": 0 00:28:31.497 }, 00:28:31.497 "claimed": true, 00:28:31.497 "claim_type": "exclusive_write", 00:28:31.497 "zoned": false, 00:28:31.497 "supported_io_types": { 00:28:31.497 "read": true, 00:28:31.497 "write": true, 00:28:31.497 "unmap": true, 00:28:31.497 "write_zeroes": true, 00:28:31.497 "flush": true, 00:28:31.497 "reset": true, 00:28:31.497 "compare": false, 00:28:31.497 "compare_and_write": false, 00:28:31.497 "abort": true, 00:28:31.497 "nvme_admin": false, 00:28:31.497 "nvme_io": false 00:28:31.497 }, 00:28:31.497 "memory_domains": [ 00:28:31.497 { 00:28:31.497 "dma_device_id": "system", 00:28:31.497 "dma_device_type": 1 00:28:31.497 }, 00:28:31.497 { 00:28:31.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:31.497 "dma_device_type": 2 00:28:31.497 } 00:28:31.497 ], 00:28:31.497 "driver_specific": {} 00:28:31.497 } 00:28:31.497 ] 00:28:31.497 01:58:31 -- common/autotest_common.sh@893 -- # return 0 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:31.497 01:58:31 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:31.497 01:58:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:31.498 01:58:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:31.498 01:58:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:31.498 01:58:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.498 01:58:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:31.755 01:58:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:31.755 "name": "Existed_Raid", 00:28:31.755 "uuid": "fa005152-2b1a-4d01-9cad-656721e1c79b", 00:28:31.755 "strip_size_kb": 64, 00:28:31.755 "state": "configuring", 00:28:31.755 "raid_level": "raid0", 00:28:31.755 "superblock": true, 00:28:31.755 "num_base_bdevs": 4, 00:28:31.755 "num_base_bdevs_discovered": 1, 00:28:31.755 "num_base_bdevs_operational": 4, 00:28:31.755 "base_bdevs_list": [ 00:28:31.755 { 00:28:31.755 "name": "BaseBdev1", 00:28:31.755 "uuid": "9f964115-6cdc-4747-8450-29340778c5ea", 00:28:31.755 "is_configured": true, 00:28:31.755 "data_offset": 2048, 00:28:31.755 "data_size": 63488 00:28:31.755 }, 00:28:31.755 { 00:28:31.755 "name": "BaseBdev2", 00:28:31.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.755 "is_configured": false, 00:28:31.755 "data_offset": 0, 00:28:31.755 "data_size": 0 00:28:31.755 }, 00:28:31.755 { 00:28:31.755 "name": "BaseBdev3", 00:28:31.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.755 "is_configured": false, 00:28:31.755 "data_offset": 0, 00:28:31.755 "data_size": 0 00:28:31.755 }, 00:28:31.755 { 00:28:31.755 "name": "BaseBdev4", 00:28:31.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.755 "is_configured": false, 00:28:31.755 "data_offset": 0, 00:28:31.755 "data_size": 0 00:28:31.755 } 00:28:31.755 ] 00:28:31.755 }' 00:28:31.755 01:58:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:31.755 01:58:31 -- common/autotest_common.sh@10 -- # set +x 00:28:32.321 01:58:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:32.580 [2024-04-24 01:58:32.531827] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:32.580 [2024-04-24 01:58:32.532120] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:28:32.580 01:58:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:28:32.580 01:58:32 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:32.838 01:58:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:33.097 BaseBdev1 00:28:33.097 01:58:33 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:28:33.097 01:58:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:28:33.097 01:58:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:33.097 01:58:33 -- common/autotest_common.sh@887 -- # local i 00:28:33.097 01:58:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:33.097 01:58:33 -- common/autotest_common.sh@888 -- # 
bdev_timeout=2000 00:28:33.097 01:58:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:33.356 01:58:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:33.614 [ 00:28:33.614 { 00:28:33.614 "name": "BaseBdev1", 00:28:33.614 "aliases": [ 00:28:33.614 "6837855d-fc89-4942-b07c-3fe96dcd6119" 00:28:33.614 ], 00:28:33.614 "product_name": "Malloc disk", 00:28:33.614 "block_size": 512, 00:28:33.614 "num_blocks": 65536, 00:28:33.614 "uuid": "6837855d-fc89-4942-b07c-3fe96dcd6119", 00:28:33.614 "assigned_rate_limits": { 00:28:33.614 "rw_ios_per_sec": 0, 00:28:33.614 "rw_mbytes_per_sec": 0, 00:28:33.614 "r_mbytes_per_sec": 0, 00:28:33.614 "w_mbytes_per_sec": 0 00:28:33.614 }, 00:28:33.614 "claimed": false, 00:28:33.614 "zoned": false, 00:28:33.614 "supported_io_types": { 00:28:33.614 "read": true, 00:28:33.614 "write": true, 00:28:33.614 "unmap": true, 00:28:33.614 "write_zeroes": true, 00:28:33.614 "flush": true, 00:28:33.614 "reset": true, 00:28:33.614 "compare": false, 00:28:33.614 "compare_and_write": false, 00:28:33.614 "abort": true, 00:28:33.614 "nvme_admin": false, 00:28:33.614 "nvme_io": false 00:28:33.614 }, 00:28:33.614 "memory_domains": [ 00:28:33.614 { 00:28:33.614 "dma_device_id": "system", 00:28:33.614 "dma_device_type": 1 00:28:33.614 }, 00:28:33.614 { 00:28:33.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.614 "dma_device_type": 2 00:28:33.614 } 00:28:33.614 ], 00:28:33.614 "driver_specific": {} 00:28:33.615 } 00:28:33.615 ] 00:28:33.615 01:58:33 -- common/autotest_common.sh@893 -- # return 0 00:28:33.615 01:58:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:33.615 [2024-04-24 01:58:33.695713] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:33.615 [2024-04-24 01:58:33.698092] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:33.615 [2024-04-24 01:58:33.698282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:33.615 [2024-04-24 01:58:33.698374] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:33.615 [2024-04-24 01:58:33.698433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:33.615 [2024-04-24 01:58:33.698509] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:33.615 [2024-04-24 01:58:33.698560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:33.872 01:58:33 -- 
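When bdev_raid_create names a base bdev that already exists, the raid module claims it right away, which is why the trace above reports "bdev BaseBdev1 is claimed" while BaseBdev2 through BaseBdev4 do not exist yet. The claim shows up in bdev_get_bdevs output as "claimed": true with "claim_type": "exclusive_write", and the waitforbdev helper seen throughout this log waits for each base bdev with bdev_wait_for_examine followed by a bdev_get_bdevs lookup bounded by the -t 2000 value. A small sketch of inspecting a base bdev's claim by hand against the same RPC socket as this run; the trailing jq projection is only for illustration and is not part of the test itself:

    # Let examine callbacks finish, then wait for the bdev and inspect its claim fields.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b BaseBdev1 -t 2000 | jq '.[0] | {claimed, claim_type}'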
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.872 01:58:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:34.130 01:58:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:34.130 "name": "Existed_Raid", 00:28:34.130 "uuid": "10c1e24e-0422-4566-b325-6569f9ae42d4", 00:28:34.130 "strip_size_kb": 64, 00:28:34.130 "state": "configuring", 00:28:34.130 "raid_level": "raid0", 00:28:34.130 "superblock": true, 00:28:34.130 "num_base_bdevs": 4, 00:28:34.130 "num_base_bdevs_discovered": 1, 00:28:34.130 "num_base_bdevs_operational": 4, 00:28:34.130 "base_bdevs_list": [ 00:28:34.130 { 00:28:34.130 "name": "BaseBdev1", 00:28:34.130 "uuid": "6837855d-fc89-4942-b07c-3fe96dcd6119", 00:28:34.130 "is_configured": true, 00:28:34.130 "data_offset": 2048, 00:28:34.130 "data_size": 63488 00:28:34.130 }, 00:28:34.130 { 00:28:34.130 "name": "BaseBdev2", 00:28:34.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.130 "is_configured": false, 00:28:34.130 "data_offset": 0, 00:28:34.130 "data_size": 0 00:28:34.130 }, 00:28:34.130 { 00:28:34.130 "name": "BaseBdev3", 00:28:34.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.130 "is_configured": false, 00:28:34.130 "data_offset": 0, 00:28:34.130 "data_size": 0 00:28:34.130 }, 00:28:34.130 { 00:28:34.130 "name": "BaseBdev4", 00:28:34.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.130 "is_configured": false, 00:28:34.130 "data_offset": 0, 00:28:34.130 "data_size": 0 00:28:34.130 } 00:28:34.130 ] 00:28:34.130 }' 00:28:34.130 01:58:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:34.130 01:58:33 -- common/autotest_common.sh@10 -- # set +x 00:28:34.695 01:58:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:34.953 [2024-04-24 01:58:34.949292] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:34.953 BaseBdev2 00:28:34.953 01:58:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:28:34.953 01:58:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:28:34.953 01:58:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:34.953 01:58:34 -- common/autotest_common.sh@887 -- # local i 00:28:34.953 01:58:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:34.953 01:58:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:34.953 01:58:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:35.211 01:58:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:35.469 [ 00:28:35.469 { 00:28:35.469 "name": "BaseBdev2", 00:28:35.469 "aliases": [ 00:28:35.469 "751b2077-ae83-47a9-88de-6a3022caf543" 00:28:35.469 ], 00:28:35.469 "product_name": "Malloc disk", 00:28:35.469 "block_size": 512, 00:28:35.469 "num_blocks": 65536, 00:28:35.469 "uuid": "751b2077-ae83-47a9-88de-6a3022caf543", 00:28:35.469 "assigned_rate_limits": { 00:28:35.469 "rw_ios_per_sec": 0, 00:28:35.469 
"rw_mbytes_per_sec": 0, 00:28:35.469 "r_mbytes_per_sec": 0, 00:28:35.469 "w_mbytes_per_sec": 0 00:28:35.469 }, 00:28:35.469 "claimed": true, 00:28:35.469 "claim_type": "exclusive_write", 00:28:35.469 "zoned": false, 00:28:35.469 "supported_io_types": { 00:28:35.469 "read": true, 00:28:35.469 "write": true, 00:28:35.469 "unmap": true, 00:28:35.469 "write_zeroes": true, 00:28:35.469 "flush": true, 00:28:35.469 "reset": true, 00:28:35.469 "compare": false, 00:28:35.469 "compare_and_write": false, 00:28:35.469 "abort": true, 00:28:35.469 "nvme_admin": false, 00:28:35.469 "nvme_io": false 00:28:35.469 }, 00:28:35.469 "memory_domains": [ 00:28:35.469 { 00:28:35.469 "dma_device_id": "system", 00:28:35.469 "dma_device_type": 1 00:28:35.469 }, 00:28:35.469 { 00:28:35.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:35.469 "dma_device_type": 2 00:28:35.469 } 00:28:35.469 ], 00:28:35.469 "driver_specific": {} 00:28:35.469 } 00:28:35.469 ] 00:28:35.469 01:58:35 -- common/autotest_common.sh@893 -- # return 0 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.469 01:58:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:35.727 01:58:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:35.727 "name": "Existed_Raid", 00:28:35.727 "uuid": "10c1e24e-0422-4566-b325-6569f9ae42d4", 00:28:35.727 "strip_size_kb": 64, 00:28:35.727 "state": "configuring", 00:28:35.727 "raid_level": "raid0", 00:28:35.727 "superblock": true, 00:28:35.727 "num_base_bdevs": 4, 00:28:35.727 "num_base_bdevs_discovered": 2, 00:28:35.727 "num_base_bdevs_operational": 4, 00:28:35.727 "base_bdevs_list": [ 00:28:35.727 { 00:28:35.727 "name": "BaseBdev1", 00:28:35.727 "uuid": "6837855d-fc89-4942-b07c-3fe96dcd6119", 00:28:35.727 "is_configured": true, 00:28:35.727 "data_offset": 2048, 00:28:35.727 "data_size": 63488 00:28:35.727 }, 00:28:35.727 { 00:28:35.727 "name": "BaseBdev2", 00:28:35.727 "uuid": "751b2077-ae83-47a9-88de-6a3022caf543", 00:28:35.727 "is_configured": true, 00:28:35.727 "data_offset": 2048, 00:28:35.727 "data_size": 63488 00:28:35.727 }, 00:28:35.727 { 00:28:35.727 "name": "BaseBdev3", 00:28:35.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.727 "is_configured": false, 00:28:35.727 "data_offset": 0, 00:28:35.727 "data_size": 0 00:28:35.727 }, 00:28:35.727 { 00:28:35.727 "name": "BaseBdev4", 00:28:35.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.727 "is_configured": false, 00:28:35.727 "data_offset": 0, 00:28:35.727 "data_size": 
0 00:28:35.727 } 00:28:35.727 ] 00:28:35.727 }' 00:28:35.727 01:58:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:35.727 01:58:35 -- common/autotest_common.sh@10 -- # set +x 00:28:36.293 01:58:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:36.552 [2024-04-24 01:58:36.605031] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:36.552 BaseBdev3 00:28:36.552 01:58:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:28:36.552 01:58:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:28:36.552 01:58:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:36.552 01:58:36 -- common/autotest_common.sh@887 -- # local i 00:28:36.552 01:58:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:36.552 01:58:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:36.552 01:58:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:36.811 01:58:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:37.070 [ 00:28:37.070 { 00:28:37.070 "name": "BaseBdev3", 00:28:37.070 "aliases": [ 00:28:37.070 "edab6e9c-756d-4487-be90-02fe8607827d" 00:28:37.070 ], 00:28:37.070 "product_name": "Malloc disk", 00:28:37.070 "block_size": 512, 00:28:37.070 "num_blocks": 65536, 00:28:37.070 "uuid": "edab6e9c-756d-4487-be90-02fe8607827d", 00:28:37.070 "assigned_rate_limits": { 00:28:37.070 "rw_ios_per_sec": 0, 00:28:37.070 "rw_mbytes_per_sec": 0, 00:28:37.070 "r_mbytes_per_sec": 0, 00:28:37.070 "w_mbytes_per_sec": 0 00:28:37.070 }, 00:28:37.070 "claimed": true, 00:28:37.070 "claim_type": "exclusive_write", 00:28:37.070 "zoned": false, 00:28:37.070 "supported_io_types": { 00:28:37.070 "read": true, 00:28:37.070 "write": true, 00:28:37.070 "unmap": true, 00:28:37.070 "write_zeroes": true, 00:28:37.070 "flush": true, 00:28:37.070 "reset": true, 00:28:37.070 "compare": false, 00:28:37.070 "compare_and_write": false, 00:28:37.070 "abort": true, 00:28:37.070 "nvme_admin": false, 00:28:37.070 "nvme_io": false 00:28:37.070 }, 00:28:37.070 "memory_domains": [ 00:28:37.070 { 00:28:37.070 "dma_device_id": "system", 00:28:37.070 "dma_device_type": 1 00:28:37.070 }, 00:28:37.070 { 00:28:37.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:37.070 "dma_device_type": 2 00:28:37.070 } 00:28:37.070 ], 00:28:37.070 "driver_specific": {} 00:28:37.070 } 00:28:37.070 ] 00:28:37.070 01:58:37 -- common/autotest_common.sh@893 -- # return 0 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.070 01:58:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.329 01:58:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:37.329 "name": "Existed_Raid", 00:28:37.329 "uuid": "10c1e24e-0422-4566-b325-6569f9ae42d4", 00:28:37.329 "strip_size_kb": 64, 00:28:37.329 "state": "configuring", 00:28:37.329 "raid_level": "raid0", 00:28:37.329 "superblock": true, 00:28:37.329 "num_base_bdevs": 4, 00:28:37.329 "num_base_bdevs_discovered": 3, 00:28:37.329 "num_base_bdevs_operational": 4, 00:28:37.329 "base_bdevs_list": [ 00:28:37.329 { 00:28:37.329 "name": "BaseBdev1", 00:28:37.329 "uuid": "6837855d-fc89-4942-b07c-3fe96dcd6119", 00:28:37.329 "is_configured": true, 00:28:37.329 "data_offset": 2048, 00:28:37.329 "data_size": 63488 00:28:37.329 }, 00:28:37.329 { 00:28:37.329 "name": "BaseBdev2", 00:28:37.329 "uuid": "751b2077-ae83-47a9-88de-6a3022caf543", 00:28:37.329 "is_configured": true, 00:28:37.329 "data_offset": 2048, 00:28:37.329 "data_size": 63488 00:28:37.329 }, 00:28:37.329 { 00:28:37.329 "name": "BaseBdev3", 00:28:37.329 "uuid": "edab6e9c-756d-4487-be90-02fe8607827d", 00:28:37.329 "is_configured": true, 00:28:37.329 "data_offset": 2048, 00:28:37.329 "data_size": 63488 00:28:37.329 }, 00:28:37.329 { 00:28:37.329 "name": "BaseBdev4", 00:28:37.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.329 "is_configured": false, 00:28:37.329 "data_offset": 0, 00:28:37.329 "data_size": 0 00:28:37.329 } 00:28:37.329 ] 00:28:37.329 }' 00:28:37.329 01:58:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:37.329 01:58:37 -- common/autotest_common.sh@10 -- # set +x 00:28:37.896 01:58:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:38.464 [2024-04-24 01:58:38.287854] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:38.464 [2024-04-24 01:58:38.288379] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:28:38.464 [2024-04-24 01:58:38.288513] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:28:38.464 [2024-04-24 01:58:38.288711] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:28:38.464 [2024-04-24 01:58:38.289150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:28:38.464 BaseBdev4 00:28:38.464 [2024-04-24 01:58:38.289285] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:28:38.464 [2024-04-24 01:58:38.289547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.464 01:58:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:28:38.464 01:58:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:28:38.464 01:58:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:38.464 01:58:38 -- common/autotest_common.sh@887 -- # local i 00:28:38.464 01:58:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:38.464 01:58:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:38.464 01:58:38 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:28:38.464 01:58:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:38.722 [ 00:28:38.722 { 00:28:38.722 "name": "BaseBdev4", 00:28:38.722 "aliases": [ 00:28:38.722 "af97cbd9-ffa1-45bd-be83-ea766712fd96" 00:28:38.722 ], 00:28:38.722 "product_name": "Malloc disk", 00:28:38.722 "block_size": 512, 00:28:38.722 "num_blocks": 65536, 00:28:38.722 "uuid": "af97cbd9-ffa1-45bd-be83-ea766712fd96", 00:28:38.722 "assigned_rate_limits": { 00:28:38.722 "rw_ios_per_sec": 0, 00:28:38.722 "rw_mbytes_per_sec": 0, 00:28:38.722 "r_mbytes_per_sec": 0, 00:28:38.722 "w_mbytes_per_sec": 0 00:28:38.722 }, 00:28:38.722 "claimed": true, 00:28:38.722 "claim_type": "exclusive_write", 00:28:38.722 "zoned": false, 00:28:38.722 "supported_io_types": { 00:28:38.722 "read": true, 00:28:38.722 "write": true, 00:28:38.722 "unmap": true, 00:28:38.722 "write_zeroes": true, 00:28:38.722 "flush": true, 00:28:38.722 "reset": true, 00:28:38.722 "compare": false, 00:28:38.722 "compare_and_write": false, 00:28:38.722 "abort": true, 00:28:38.722 "nvme_admin": false, 00:28:38.722 "nvme_io": false 00:28:38.722 }, 00:28:38.722 "memory_domains": [ 00:28:38.722 { 00:28:38.722 "dma_device_id": "system", 00:28:38.722 "dma_device_type": 1 00:28:38.722 }, 00:28:38.722 { 00:28:38.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.722 "dma_device_type": 2 00:28:38.722 } 00:28:38.722 ], 00:28:38.722 "driver_specific": {} 00:28:38.722 } 00:28:38.722 ] 00:28:38.722 01:58:38 -- common/autotest_common.sh@893 -- # return 0 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:38.722 01:58:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.981 01:58:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:38.981 "name": "Existed_Raid", 00:28:38.981 "uuid": "10c1e24e-0422-4566-b325-6569f9ae42d4", 00:28:38.981 "strip_size_kb": 64, 00:28:38.981 "state": "online", 00:28:38.981 "raid_level": "raid0", 00:28:38.981 "superblock": true, 00:28:38.981 "num_base_bdevs": 4, 00:28:38.981 "num_base_bdevs_discovered": 4, 00:28:38.981 "num_base_bdevs_operational": 4, 00:28:38.981 "base_bdevs_list": [ 00:28:38.981 { 00:28:38.981 "name": "BaseBdev1", 00:28:38.981 "uuid": "6837855d-fc89-4942-b07c-3fe96dcd6119", 00:28:38.981 "is_configured": true, 00:28:38.981 "data_offset": 2048, 00:28:38.981 "data_size": 63488 00:28:38.981 }, 00:28:38.981 { 00:28:38.981 "name": "BaseBdev2", 00:28:38.981 
"uuid": "751b2077-ae83-47a9-88de-6a3022caf543", 00:28:38.981 "is_configured": true, 00:28:38.981 "data_offset": 2048, 00:28:38.981 "data_size": 63488 00:28:38.981 }, 00:28:38.981 { 00:28:38.981 "name": "BaseBdev3", 00:28:38.981 "uuid": "edab6e9c-756d-4487-be90-02fe8607827d", 00:28:38.981 "is_configured": true, 00:28:38.981 "data_offset": 2048, 00:28:38.981 "data_size": 63488 00:28:38.981 }, 00:28:38.981 { 00:28:38.981 "name": "BaseBdev4", 00:28:38.981 "uuid": "af97cbd9-ffa1-45bd-be83-ea766712fd96", 00:28:38.981 "is_configured": true, 00:28:38.981 "data_offset": 2048, 00:28:38.981 "data_size": 63488 00:28:38.981 } 00:28:38.981 ] 00:28:38.981 }' 00:28:38.981 01:58:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:38.981 01:58:38 -- common/autotest_common.sh@10 -- # set +x 00:28:39.548 01:58:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:39.807 [2024-04-24 01:58:39.836330] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:39.807 [2024-04-24 01:58:39.836517] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:39.807 [2024-04-24 01:58:39.836666] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:40.067 01:58:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.325 01:58:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:40.325 "name": "Existed_Raid", 00:28:40.325 "uuid": "10c1e24e-0422-4566-b325-6569f9ae42d4", 00:28:40.325 "strip_size_kb": 64, 00:28:40.325 "state": "offline", 00:28:40.325 "raid_level": "raid0", 00:28:40.325 "superblock": true, 00:28:40.325 "num_base_bdevs": 4, 00:28:40.325 "num_base_bdevs_discovered": 3, 00:28:40.325 "num_base_bdevs_operational": 3, 00:28:40.325 "base_bdevs_list": [ 00:28:40.325 { 00:28:40.325 "name": null, 00:28:40.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.325 "is_configured": false, 00:28:40.325 "data_offset": 2048, 00:28:40.325 "data_size": 63488 00:28:40.325 }, 00:28:40.325 { 00:28:40.325 "name": "BaseBdev2", 00:28:40.325 "uuid": "751b2077-ae83-47a9-88de-6a3022caf543", 00:28:40.325 "is_configured": true, 00:28:40.325 "data_offset": 2048, 
00:28:40.325 "data_size": 63488 00:28:40.325 }, 00:28:40.325 { 00:28:40.325 "name": "BaseBdev3", 00:28:40.325 "uuid": "edab6e9c-756d-4487-be90-02fe8607827d", 00:28:40.325 "is_configured": true, 00:28:40.325 "data_offset": 2048, 00:28:40.325 "data_size": 63488 00:28:40.325 }, 00:28:40.325 { 00:28:40.325 "name": "BaseBdev4", 00:28:40.325 "uuid": "af97cbd9-ffa1-45bd-be83-ea766712fd96", 00:28:40.325 "is_configured": true, 00:28:40.325 "data_offset": 2048, 00:28:40.325 "data_size": 63488 00:28:40.325 } 00:28:40.325 ] 00:28:40.325 }' 00:28:40.325 01:58:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:40.325 01:58:40 -- common/autotest_common.sh@10 -- # set +x 00:28:40.893 01:58:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:28:40.893 01:58:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:40.893 01:58:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:40.893 01:58:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.151 01:58:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:41.151 01:58:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:41.151 01:58:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:41.409 [2024-04-24 01:58:41.388998] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:41.667 01:58:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:41.667 01:58:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:41.667 01:58:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.667 01:58:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:41.925 01:58:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:41.925 01:58:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:41.925 01:58:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:42.183 [2024-04-24 01:58:42.070744] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:42.183 01:58:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:42.183 01:58:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:42.183 01:58:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:42.183 01:58:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.441 01:58:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:42.441 01:58:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:42.441 01:58:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:42.699 [2024-04-24 01:58:42.676724] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:42.699 [2024-04-24 01:58:42.676988] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:28:42.957 01:58:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:42.958 01:58:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:42.958 01:58:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.958 01:58:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 
00:28:43.216 01:58:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:28:43.216 01:58:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:28:43.216 01:58:43 -- bdev/bdev_raid.sh@287 -- # killprocess 127392 00:28:43.216 01:58:43 -- common/autotest_common.sh@936 -- # '[' -z 127392 ']' 00:28:43.216 01:58:43 -- common/autotest_common.sh@940 -- # kill -0 127392 00:28:43.216 01:58:43 -- common/autotest_common.sh@941 -- # uname 00:28:43.216 01:58:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:43.216 01:58:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127392 00:28:43.217 killing process with pid 127392 00:28:43.217 01:58:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:43.217 01:58:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:43.217 01:58:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127392' 00:28:43.217 01:58:43 -- common/autotest_common.sh@955 -- # kill 127392 00:28:43.217 [2024-04-24 01:58:43.067709] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:43.217 01:58:43 -- common/autotest_common.sh@960 -- # wait 127392 00:28:43.217 [2024-04-24 01:58:43.067841] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:44.593 ************************************ 00:28:44.593 END TEST raid_state_function_test_sb 00:28:44.593 ************************************ 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:28:44.593 00:28:44.593 real 0m16.361s 00:28:44.593 user 0m28.254s 00:28:44.593 sys 0m2.150s 00:28:44.593 01:58:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:44.593 01:58:44 -- common/autotest_common.sh@10 -- # set +x 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:28:44.593 01:58:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:44.593 01:58:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:44.593 01:58:44 -- common/autotest_common.sh@10 -- # set +x 00:28:44.593 ************************************ 00:28:44.593 START TEST raid_superblock_test 00:28:44.593 ************************************ 00:28:44.593 01:58:44 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 4 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=127864 00:28:44.593 01:58:44 -- 
bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:44.593 01:58:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127864 /var/tmp/spdk-raid.sock 00:28:44.593 01:58:44 -- common/autotest_common.sh@817 -- # '[' -z 127864 ']' 00:28:44.593 01:58:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:44.593 01:58:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:44.593 01:58:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:44.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:44.593 01:58:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:44.593 01:58:44 -- common/autotest_common.sh@10 -- # set +x 00:28:44.593 [2024-04-24 01:58:44.662967] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:28:44.593 [2024-04-24 01:58:44.663138] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127864 ] 00:28:44.851 [2024-04-24 01:58:44.824582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.110 [2024-04-24 01:58:45.048062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.368 [2024-04-24 01:58:45.291115] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:45.625 01:58:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:45.625 01:58:45 -- common/autotest_common.sh@850 -- # return 0 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:45.625 01:58:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:45.882 malloc1 00:28:45.882 01:58:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:46.140 [2024-04-24 01:58:46.024904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:46.140 [2024-04-24 01:58:46.025007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:46.140 [2024-04-24 01:58:46.025046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:28:46.140 [2024-04-24 01:58:46.025086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:46.140 [2024-04-24 01:58:46.027469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:46.140 [2024-04-24 01:58:46.027525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:46.140 pt1 00:28:46.140 01:58:46 
-- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:46.140 01:58:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:46.397 malloc2 00:28:46.397 01:58:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:46.653 [2024-04-24 01:58:46.539969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:46.653 [2024-04-24 01:58:46.540063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:46.653 [2024-04-24 01:58:46.540106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:46.653 [2024-04-24 01:58:46.540165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:46.653 [2024-04-24 01:58:46.542408] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:46.653 [2024-04-24 01:58:46.542459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:46.653 pt2 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:46.653 01:58:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:46.910 malloc3 00:28:46.910 01:58:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:47.168 [2024-04-24 01:58:47.065854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:47.168 [2024-04-24 01:58:47.065952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.168 [2024-04-24 01:58:47.066017] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:47.168 [2024-04-24 01:58:47.066066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.168 [2024-04-24 01:58:47.068583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.168 [2024-04-24 01:58:47.068644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:47.168 pt3 00:28:47.168 01:58:47 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:47.168 01:58:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:28:47.426 malloc4 00:28:47.426 01:58:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:47.683 [2024-04-24 01:58:47.690710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:47.683 [2024-04-24 01:58:47.690809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.683 [2024-04-24 01:58:47.690845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:47.683 [2024-04-24 01:58:47.690887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.683 [2024-04-24 01:58:47.693397] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.683 [2024-04-24 01:58:47.693452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:47.683 pt4 00:28:47.683 01:58:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:47.683 01:58:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:47.683 01:58:47 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:28:47.941 [2024-04-24 01:58:47.902781] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:47.941 [2024-04-24 01:58:47.905024] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:47.941 [2024-04-24 01:58:47.905099] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:47.941 [2024-04-24 01:58:47.905177] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:47.941 [2024-04-24 01:58:47.905394] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:28:47.941 [2024-04-24 01:58:47.905406] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:28:47.941 [2024-04-24 01:58:47.905545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:47.941 [2024-04-24 01:58:47.905903] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:28:47.941 [2024-04-24 01:58:47.905915] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:28:47.941 [2024-04-24 01:58:47.906137] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:47.941 01:58:47 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.941 01:58:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.199 01:58:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:48.199 "name": "raid_bdev1", 00:28:48.199 "uuid": "b2502758-f577-445c-8747-5c64f394348e", 00:28:48.199 "strip_size_kb": 64, 00:28:48.199 "state": "online", 00:28:48.199 "raid_level": "raid0", 00:28:48.199 "superblock": true, 00:28:48.199 "num_base_bdevs": 4, 00:28:48.199 "num_base_bdevs_discovered": 4, 00:28:48.199 "num_base_bdevs_operational": 4, 00:28:48.199 "base_bdevs_list": [ 00:28:48.199 { 00:28:48.199 "name": "pt1", 00:28:48.199 "uuid": "d580cbc8-f53b-5cbc-94f2-aa947488c904", 00:28:48.199 "is_configured": true, 00:28:48.199 "data_offset": 2048, 00:28:48.199 "data_size": 63488 00:28:48.199 }, 00:28:48.199 { 00:28:48.199 "name": "pt2", 00:28:48.199 "uuid": "ecb9b741-699b-57b0-bf5b-3812a51e6376", 00:28:48.199 "is_configured": true, 00:28:48.199 "data_offset": 2048, 00:28:48.199 "data_size": 63488 00:28:48.199 }, 00:28:48.199 { 00:28:48.199 "name": "pt3", 00:28:48.199 "uuid": "d696899e-dd58-572b-8bd3-a5dcdceb8db6", 00:28:48.199 "is_configured": true, 00:28:48.199 "data_offset": 2048, 00:28:48.199 "data_size": 63488 00:28:48.199 }, 00:28:48.199 { 00:28:48.199 "name": "pt4", 00:28:48.199 "uuid": "ce280504-66fe-5af9-b895-0c0d60a8067c", 00:28:48.199 "is_configured": true, 00:28:48.199 "data_offset": 2048, 00:28:48.199 "data_size": 63488 00:28:48.199 } 00:28:48.199 ] 00:28:48.199 }' 00:28:48.199 01:58:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:48.199 01:58:48 -- common/autotest_common.sh@10 -- # set +x 00:28:48.804 01:58:48 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:48.804 01:58:48 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:28:49.063 [2024-04-24 01:58:49.075297] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:49.063 01:58:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b2502758-f577-445c-8747-5c64f394348e 00:28:49.063 01:58:49 -- bdev/bdev_raid.sh@380 -- # '[' -z b2502758-f577-445c-8747-5c64f394348e ']' 00:28:49.063 01:58:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:49.321 [2024-04-24 01:58:49.375021] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:49.321 [2024-04-24 01:58:49.375073] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:49.321 [2024-04-24 01:58:49.375155] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:49.321 [2024-04-24 01:58:49.375221] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:28:49.321 [2024-04-24 01:58:49.375231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:28:49.321 01:58:49 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:28:49.321 01:58:49 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.887 01:58:49 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:28:49.887 01:58:49 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:28:49.887 01:58:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:49.887 01:58:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:49.887 01:58:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:49.887 01:58:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:50.451 01:58:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:50.452 01:58:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:50.709 01:58:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:50.709 01:58:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:50.709 01:58:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:50.709 01:58:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:50.968 01:58:51 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:28:50.968 01:58:51 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:50.968 01:58:51 -- common/autotest_common.sh@638 -- # local es=0 00:28:50.968 01:58:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:50.968 01:58:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.968 01:58:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.968 01:58:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.968 01:58:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.968 01:58:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.968 01:58:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.968 01:58:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.968 01:58:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:50.968 01:58:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:51.228 [2024-04-24 01:58:51.223367] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:51.228 [2024-04-24 01:58:51.225450] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:51.228 
[2024-04-24 01:58:51.225504] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:51.228 [2024-04-24 01:58:51.225541] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:51.228 [2024-04-24 01:58:51.225588] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:28:51.228 [2024-04-24 01:58:51.225657] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:28:51.228 [2024-04-24 01:58:51.225704] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:28:51.228 [2024-04-24 01:58:51.225761] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:28:51.228 [2024-04-24 01:58:51.225785] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:51.228 [2024-04-24 01:58:51.225796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:28:51.228 request: 00:28:51.228 { 00:28:51.228 "name": "raid_bdev1", 00:28:51.228 "raid_level": "raid0", 00:28:51.228 "base_bdevs": [ 00:28:51.228 "malloc1", 00:28:51.228 "malloc2", 00:28:51.228 "malloc3", 00:28:51.228 "malloc4" 00:28:51.228 ], 00:28:51.228 "superblock": false, 00:28:51.228 "strip_size_kb": 64, 00:28:51.228 "method": "bdev_raid_create", 00:28:51.228 "req_id": 1 00:28:51.228 } 00:28:51.228 Got JSON-RPC error response 00:28:51.228 response: 00:28:51.228 { 00:28:51.228 "code": -17, 00:28:51.228 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:51.228 } 00:28:51.228 01:58:51 -- common/autotest_common.sh@641 -- # es=1 00:28:51.228 01:58:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:51.228 01:58:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:51.228 01:58:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:51.228 01:58:51 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.228 01:58:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:28:51.487 01:58:51 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:28:51.487 01:58:51 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:28:51.487 01:58:51 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:51.745 [2024-04-24 01:58:51.639364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:51.745 [2024-04-24 01:58:51.639444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.745 [2024-04-24 01:58:51.639477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:28:51.745 [2024-04-24 01:58:51.639505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.745 [2024-04-24 01:58:51.642144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.745 [2024-04-24 01:58:51.642239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:51.745 [2024-04-24 01:58:51.642360] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:51.745 [2024-04-24 01:58:51.642420] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:51.745 pt1 
00:28:51.745 01:58:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:28:51.745 01:58:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:51.745 01:58:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:51.745 01:58:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.746 01:58:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.005 01:58:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:52.005 "name": "raid_bdev1", 00:28:52.005 "uuid": "b2502758-f577-445c-8747-5c64f394348e", 00:28:52.005 "strip_size_kb": 64, 00:28:52.005 "state": "configuring", 00:28:52.005 "raid_level": "raid0", 00:28:52.005 "superblock": true, 00:28:52.005 "num_base_bdevs": 4, 00:28:52.005 "num_base_bdevs_discovered": 1, 00:28:52.005 "num_base_bdevs_operational": 4, 00:28:52.005 "base_bdevs_list": [ 00:28:52.005 { 00:28:52.005 "name": "pt1", 00:28:52.005 "uuid": "d580cbc8-f53b-5cbc-94f2-aa947488c904", 00:28:52.005 "is_configured": true, 00:28:52.005 "data_offset": 2048, 00:28:52.005 "data_size": 63488 00:28:52.005 }, 00:28:52.005 { 00:28:52.005 "name": null, 00:28:52.005 "uuid": "ecb9b741-699b-57b0-bf5b-3812a51e6376", 00:28:52.005 "is_configured": false, 00:28:52.005 "data_offset": 2048, 00:28:52.005 "data_size": 63488 00:28:52.005 }, 00:28:52.005 { 00:28:52.005 "name": null, 00:28:52.005 "uuid": "d696899e-dd58-572b-8bd3-a5dcdceb8db6", 00:28:52.005 "is_configured": false, 00:28:52.005 "data_offset": 2048, 00:28:52.005 "data_size": 63488 00:28:52.005 }, 00:28:52.005 { 00:28:52.005 "name": null, 00:28:52.005 "uuid": "ce280504-66fe-5af9-b895-0c0d60a8067c", 00:28:52.005 "is_configured": false, 00:28:52.005 "data_offset": 2048, 00:28:52.005 "data_size": 63488 00:28:52.005 } 00:28:52.005 ] 00:28:52.005 }' 00:28:52.005 01:58:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:52.005 01:58:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.572 01:58:52 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:28:52.572 01:58:52 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:52.830 [2024-04-24 01:58:52.763633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:52.830 [2024-04-24 01:58:52.763728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.830 [2024-04-24 01:58:52.763800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:52.830 [2024-04-24 01:58:52.763824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.830 [2024-04-24 01:58:52.764356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.830 [2024-04-24 01:58:52.764400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:28:52.830 [2024-04-24 01:58:52.764525] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:52.830 [2024-04-24 01:58:52.764547] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:52.830 pt2 00:28:52.830 01:58:52 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:53.088 [2024-04-24 01:58:52.979705] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.088 01:58:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.347 01:58:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:53.347 "name": "raid_bdev1", 00:28:53.347 "uuid": "b2502758-f577-445c-8747-5c64f394348e", 00:28:53.347 "strip_size_kb": 64, 00:28:53.347 "state": "configuring", 00:28:53.347 "raid_level": "raid0", 00:28:53.347 "superblock": true, 00:28:53.347 "num_base_bdevs": 4, 00:28:53.347 "num_base_bdevs_discovered": 1, 00:28:53.347 "num_base_bdevs_operational": 4, 00:28:53.347 "base_bdevs_list": [ 00:28:53.347 { 00:28:53.347 "name": "pt1", 00:28:53.347 "uuid": "d580cbc8-f53b-5cbc-94f2-aa947488c904", 00:28:53.347 "is_configured": true, 00:28:53.347 "data_offset": 2048, 00:28:53.347 "data_size": 63488 00:28:53.347 }, 00:28:53.347 { 00:28:53.347 "name": null, 00:28:53.347 "uuid": "ecb9b741-699b-57b0-bf5b-3812a51e6376", 00:28:53.347 "is_configured": false, 00:28:53.347 "data_offset": 2048, 00:28:53.347 "data_size": 63488 00:28:53.347 }, 00:28:53.347 { 00:28:53.347 "name": null, 00:28:53.347 "uuid": "d696899e-dd58-572b-8bd3-a5dcdceb8db6", 00:28:53.347 "is_configured": false, 00:28:53.347 "data_offset": 2048, 00:28:53.347 "data_size": 63488 00:28:53.347 }, 00:28:53.347 { 00:28:53.347 "name": null, 00:28:53.347 "uuid": "ce280504-66fe-5af9-b895-0c0d60a8067c", 00:28:53.347 "is_configured": false, 00:28:53.347 "data_offset": 2048, 00:28:53.347 "data_size": 63488 00:28:53.347 } 00:28:53.347 ] 00:28:53.347 }' 00:28:53.347 01:58:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:53.347 01:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:53.915 01:58:53 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:28:53.915 01:58:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:53.915 01:58:53 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:54.174 [2024-04-24 01:58:54.027935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:28:54.174 [2024-04-24 01:58:54.028017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.174 [2024-04-24 01:58:54.028055] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:54.174 [2024-04-24 01:58:54.028078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.174 [2024-04-24 01:58:54.028568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.174 [2024-04-24 01:58:54.028619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:54.174 [2024-04-24 01:58:54.028735] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:54.174 [2024-04-24 01:58:54.028756] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:54.174 pt2 00:28:54.174 01:58:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:54.174 01:58:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:54.174 01:58:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:54.174 [2024-04-24 01:58:54.235996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:54.174 [2024-04-24 01:58:54.236083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.174 [2024-04-24 01:58:54.236131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:54.174 [2024-04-24 01:58:54.236160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.174 [2024-04-24 01:58:54.236632] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.174 [2024-04-24 01:58:54.236697] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:54.174 [2024-04-24 01:58:54.236819] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:54.174 [2024-04-24 01:58:54.236841] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:54.174 pt3 00:28:54.174 01:58:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:54.174 01:58:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:54.174 01:58:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:54.432 [2024-04-24 01:58:54.500079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:54.432 [2024-04-24 01:58:54.500203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.432 [2024-04-24 01:58:54.500244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:54.432 [2024-04-24 01:58:54.500273] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.432 [2024-04-24 01:58:54.500719] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.432 [2024-04-24 01:58:54.500764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:54.432 [2024-04-24 01:58:54.500873] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:54.432 [2024-04-24 01:58:54.500894] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:54.432 [2024-04-24 
01:58:54.501017] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:28:54.432 [2024-04-24 01:58:54.501026] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:28:54.432 [2024-04-24 01:58:54.501136] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:54.432 [2024-04-24 01:58:54.501430] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:28:54.432 [2024-04-24 01:58:54.501441] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:28:54.432 [2024-04-24 01:58:54.501568] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:54.433 pt4 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.692 01:58:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.949 01:58:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:54.949 "name": "raid_bdev1", 00:28:54.949 "uuid": "b2502758-f577-445c-8747-5c64f394348e", 00:28:54.949 "strip_size_kb": 64, 00:28:54.949 "state": "online", 00:28:54.949 "raid_level": "raid0", 00:28:54.949 "superblock": true, 00:28:54.949 "num_base_bdevs": 4, 00:28:54.949 "num_base_bdevs_discovered": 4, 00:28:54.949 "num_base_bdevs_operational": 4, 00:28:54.949 "base_bdevs_list": [ 00:28:54.949 { 00:28:54.949 "name": "pt1", 00:28:54.949 "uuid": "d580cbc8-f53b-5cbc-94f2-aa947488c904", 00:28:54.949 "is_configured": true, 00:28:54.949 "data_offset": 2048, 00:28:54.949 "data_size": 63488 00:28:54.949 }, 00:28:54.949 { 00:28:54.949 "name": "pt2", 00:28:54.949 "uuid": "ecb9b741-699b-57b0-bf5b-3812a51e6376", 00:28:54.949 "is_configured": true, 00:28:54.950 "data_offset": 2048, 00:28:54.950 "data_size": 63488 00:28:54.950 }, 00:28:54.950 { 00:28:54.950 "name": "pt3", 00:28:54.950 "uuid": "d696899e-dd58-572b-8bd3-a5dcdceb8db6", 00:28:54.950 "is_configured": true, 00:28:54.950 "data_offset": 2048, 00:28:54.950 "data_size": 63488 00:28:54.950 }, 00:28:54.950 { 00:28:54.950 "name": "pt4", 00:28:54.950 "uuid": "ce280504-66fe-5af9-b895-0c0d60a8067c", 00:28:54.950 "is_configured": true, 00:28:54.950 "data_offset": 2048, 00:28:54.950 "data_size": 63488 00:28:54.950 } 00:28:54.950 ] 00:28:54.950 }' 00:28:54.950 01:58:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:54.950 01:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:55.519 01:58:55 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:55.519 01:58:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:28:55.519 [2024-04-24 01:58:55.600648] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:55.778 01:58:55 -- bdev/bdev_raid.sh@430 -- # '[' b2502758-f577-445c-8747-5c64f394348e '!=' b2502758-f577-445c-8747-5c64f394348e ']' 00:28:55.778 01:58:55 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:28:55.778 01:58:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:55.778 01:58:55 -- bdev/bdev_raid.sh@197 -- # return 1 00:28:55.778 01:58:55 -- bdev/bdev_raid.sh@511 -- # killprocess 127864 00:28:55.778 01:58:55 -- common/autotest_common.sh@936 -- # '[' -z 127864 ']' 00:28:55.778 01:58:55 -- common/autotest_common.sh@940 -- # kill -0 127864 00:28:55.778 01:58:55 -- common/autotest_common.sh@941 -- # uname 00:28:55.778 01:58:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:55.778 01:58:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127864 00:28:55.778 01:58:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:55.778 01:58:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:55.778 01:58:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127864' 00:28:55.778 killing process with pid 127864 00:28:55.778 01:58:55 -- common/autotest_common.sh@955 -- # kill 127864 00:28:55.778 [2024-04-24 01:58:55.649605] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:55.778 01:58:55 -- common/autotest_common.sh@960 -- # wait 127864 00:28:55.778 [2024-04-24 01:58:55.649674] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:55.778 [2024-04-24 01:58:55.649735] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:55.778 [2024-04-24 01:58:55.649749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:28:56.038 [2024-04-24 01:58:56.084758] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:57.413 01:58:57 -- bdev/bdev_raid.sh@513 -- # return 0 00:28:57.413 00:28:57.413 real 0m12.897s 00:28:57.413 user 0m21.713s 00:28:57.413 sys 0m1.785s 00:28:57.413 ************************************ 00:28:57.414 END TEST raid_superblock_test 00:28:57.414 ************************************ 00:28:57.414 01:58:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:57.414 01:58:57 -- common/autotest_common.sh@10 -- # set +x 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:28:57.673 01:58:57 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:28:57.673 01:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:57.673 01:58:57 -- common/autotest_common.sh@10 -- # set +x 00:28:57.673 ************************************ 00:28:57.673 START TEST raid_state_function_test 00:28:57.673 ************************************ 00:28:57.673 01:58:57 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 false 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 
00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=128206 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:57.673 Process raid pid: 128206 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128206' 00:28:57.673 01:58:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128206 /var/tmp/spdk-raid.sock 00:28:57.673 01:58:57 -- common/autotest_common.sh@817 -- # '[' -z 128206 ']' 00:28:57.673 01:58:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:57.673 01:58:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:57.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:57.673 01:58:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:57.673 01:58:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:57.673 01:58:57 -- common/autotest_common.sh@10 -- # set +x 00:28:57.673 [2024-04-24 01:58:57.675381] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:28:57.673 [2024-04-24 01:58:57.675568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.932 [2024-04-24 01:58:57.859610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.191 [2024-04-24 01:58:58.154964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.449 [2024-04-24 01:58:58.405453] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:58.706 01:58:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:58.706 01:58:58 -- common/autotest_common.sh@850 -- # return 0 00:28:58.706 01:58:58 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:58.964 [2024-04-24 01:58:58.851717] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:58.964 [2024-04-24 01:58:58.851789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:58.964 [2024-04-24 01:58:58.851801] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:58.964 [2024-04-24 01:58:58.851826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:58.964 [2024-04-24 01:58:58.851834] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:58.964 [2024-04-24 01:58:58.851872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:58.964 [2024-04-24 01:58:58.851880] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:58.964 [2024-04-24 01:58:58.851905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:58.964 01:58:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.222 01:58:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:59.222 "name": "Existed_Raid", 00:28:59.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.222 "strip_size_kb": 64, 00:28:59.222 "state": "configuring", 00:28:59.222 "raid_level": "concat", 00:28:59.222 "superblock": false, 00:28:59.222 "num_base_bdevs": 4, 00:28:59.222 "num_base_bdevs_discovered": 0, 00:28:59.222 "num_base_bdevs_operational": 4, 00:28:59.222 "base_bdevs_list": [ 00:28:59.222 { 00:28:59.222 
"name": "BaseBdev1", 00:28:59.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.222 "is_configured": false, 00:28:59.222 "data_offset": 0, 00:28:59.223 "data_size": 0 00:28:59.223 }, 00:28:59.223 { 00:28:59.223 "name": "BaseBdev2", 00:28:59.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.223 "is_configured": false, 00:28:59.223 "data_offset": 0, 00:28:59.223 "data_size": 0 00:28:59.223 }, 00:28:59.223 { 00:28:59.223 "name": "BaseBdev3", 00:28:59.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.223 "is_configured": false, 00:28:59.223 "data_offset": 0, 00:28:59.223 "data_size": 0 00:28:59.223 }, 00:28:59.223 { 00:28:59.223 "name": "BaseBdev4", 00:28:59.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.223 "is_configured": false, 00:28:59.223 "data_offset": 0, 00:28:59.223 "data_size": 0 00:28:59.223 } 00:28:59.223 ] 00:28:59.223 }' 00:28:59.223 01:58:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:59.223 01:58:59 -- common/autotest_common.sh@10 -- # set +x 00:28:59.789 01:58:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:00.047 [2024-04-24 01:59:00.091842] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:00.047 [2024-04-24 01:59:00.091884] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:29:00.047 01:59:00 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:00.612 [2024-04-24 01:59:00.395934] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:00.612 [2024-04-24 01:59:00.396005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:00.612 [2024-04-24 01:59:00.396015] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:00.612 [2024-04-24 01:59:00.396039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:00.612 [2024-04-24 01:59:00.396046] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:00.612 [2024-04-24 01:59:00.396086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:00.612 [2024-04-24 01:59:00.396092] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:00.612 [2024-04-24 01:59:00.396123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:00.612 01:59:00 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:00.871 [2024-04-24 01:59:00.707827] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:00.871 BaseBdev1 00:29:00.871 01:59:00 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:29:00.871 01:59:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:29:00.871 01:59:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:00.871 01:59:00 -- common/autotest_common.sh@887 -- # local i 00:29:00.871 01:59:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:00.871 01:59:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:00.871 01:59:00 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:00.871 01:59:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:01.129 [ 00:29:01.129 { 00:29:01.129 "name": "BaseBdev1", 00:29:01.129 "aliases": [ 00:29:01.129 "25490134-88f4-4172-9943-e3607073dff0" 00:29:01.129 ], 00:29:01.129 "product_name": "Malloc disk", 00:29:01.129 "block_size": 512, 00:29:01.129 "num_blocks": 65536, 00:29:01.129 "uuid": "25490134-88f4-4172-9943-e3607073dff0", 00:29:01.129 "assigned_rate_limits": { 00:29:01.129 "rw_ios_per_sec": 0, 00:29:01.129 "rw_mbytes_per_sec": 0, 00:29:01.129 "r_mbytes_per_sec": 0, 00:29:01.129 "w_mbytes_per_sec": 0 00:29:01.129 }, 00:29:01.129 "claimed": true, 00:29:01.129 "claim_type": "exclusive_write", 00:29:01.129 "zoned": false, 00:29:01.129 "supported_io_types": { 00:29:01.129 "read": true, 00:29:01.129 "write": true, 00:29:01.129 "unmap": true, 00:29:01.129 "write_zeroes": true, 00:29:01.129 "flush": true, 00:29:01.129 "reset": true, 00:29:01.129 "compare": false, 00:29:01.129 "compare_and_write": false, 00:29:01.129 "abort": true, 00:29:01.129 "nvme_admin": false, 00:29:01.129 "nvme_io": false 00:29:01.129 }, 00:29:01.129 "memory_domains": [ 00:29:01.129 { 00:29:01.129 "dma_device_id": "system", 00:29:01.129 "dma_device_type": 1 00:29:01.129 }, 00:29:01.129 { 00:29:01.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:01.129 "dma_device_type": 2 00:29:01.129 } 00:29:01.129 ], 00:29:01.129 "driver_specific": {} 00:29:01.129 } 00:29:01.129 ] 00:29:01.386 01:59:01 -- common/autotest_common.sh@893 -- # return 0 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.386 01:59:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:01.644 01:59:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:01.644 "name": "Existed_Raid", 00:29:01.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.644 "strip_size_kb": 64, 00:29:01.644 "state": "configuring", 00:29:01.644 "raid_level": "concat", 00:29:01.644 "superblock": false, 00:29:01.644 "num_base_bdevs": 4, 00:29:01.644 "num_base_bdevs_discovered": 1, 00:29:01.644 "num_base_bdevs_operational": 4, 00:29:01.644 "base_bdevs_list": [ 00:29:01.644 { 00:29:01.644 "name": "BaseBdev1", 00:29:01.644 "uuid": "25490134-88f4-4172-9943-e3607073dff0", 00:29:01.644 "is_configured": true, 00:29:01.644 "data_offset": 0, 00:29:01.644 "data_size": 65536 00:29:01.644 }, 00:29:01.644 { 00:29:01.644 "name": "BaseBdev2", 00:29:01.644 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:01.644 "is_configured": false, 00:29:01.644 "data_offset": 0, 00:29:01.644 "data_size": 0 00:29:01.644 }, 00:29:01.644 { 00:29:01.644 "name": "BaseBdev3", 00:29:01.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.644 "is_configured": false, 00:29:01.644 "data_offset": 0, 00:29:01.644 "data_size": 0 00:29:01.644 }, 00:29:01.644 { 00:29:01.644 "name": "BaseBdev4", 00:29:01.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.644 "is_configured": false, 00:29:01.644 "data_offset": 0, 00:29:01.644 "data_size": 0 00:29:01.644 } 00:29:01.644 ] 00:29:01.644 }' 00:29:01.644 01:59:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:01.644 01:59:01 -- common/autotest_common.sh@10 -- # set +x 00:29:02.224 01:59:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:02.483 [2024-04-24 01:59:02.368240] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:02.483 [2024-04-24 01:59:02.368298] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:29:02.483 01:59:02 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:29:02.483 01:59:02 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:02.741 [2024-04-24 01:59:02.604344] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:02.741 [2024-04-24 01:59:02.606501] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:02.741 [2024-04-24 01:59:02.606578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:02.741 [2024-04-24 01:59:02.606589] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:02.741 [2024-04-24 01:59:02.606619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:02.741 [2024-04-24 01:59:02.606627] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:02.741 [2024-04-24 01:59:02.606646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:02.741 01:59:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:02.742 01:59:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:02.742 01:59:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:02.742 01:59:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.742 01:59:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:29:02.998 01:59:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:02.998 "name": "Existed_Raid", 00:29:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.998 "strip_size_kb": 64, 00:29:02.998 "state": "configuring", 00:29:02.998 "raid_level": "concat", 00:29:02.998 "superblock": false, 00:29:02.998 "num_base_bdevs": 4, 00:29:02.998 "num_base_bdevs_discovered": 1, 00:29:02.998 "num_base_bdevs_operational": 4, 00:29:02.998 "base_bdevs_list": [ 00:29:02.998 { 00:29:02.998 "name": "BaseBdev1", 00:29:02.998 "uuid": "25490134-88f4-4172-9943-e3607073dff0", 00:29:02.998 "is_configured": true, 00:29:02.998 "data_offset": 0, 00:29:02.998 "data_size": 65536 00:29:02.998 }, 00:29:02.998 { 00:29:02.998 "name": "BaseBdev2", 00:29:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.998 "is_configured": false, 00:29:02.998 "data_offset": 0, 00:29:02.998 "data_size": 0 00:29:02.998 }, 00:29:02.998 { 00:29:02.998 "name": "BaseBdev3", 00:29:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.998 "is_configured": false, 00:29:02.998 "data_offset": 0, 00:29:02.998 "data_size": 0 00:29:02.998 }, 00:29:02.998 { 00:29:02.998 "name": "BaseBdev4", 00:29:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.998 "is_configured": false, 00:29:02.998 "data_offset": 0, 00:29:02.998 "data_size": 0 00:29:02.998 } 00:29:02.998 ] 00:29:02.998 }' 00:29:02.998 01:59:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:02.998 01:59:02 -- common/autotest_common.sh@10 -- # set +x 00:29:03.565 01:59:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:29:03.822 [2024-04-24 01:59:03.794338] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:03.822 BaseBdev2 00:29:03.822 01:59:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:29:03.822 01:59:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:29:03.822 01:59:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:03.822 01:59:03 -- common/autotest_common.sh@887 -- # local i 00:29:03.822 01:59:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:03.822 01:59:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:03.822 01:59:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:04.079 01:59:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:04.337 [ 00:29:04.337 { 00:29:04.337 "name": "BaseBdev2", 00:29:04.337 "aliases": [ 00:29:04.337 "114c554e-ba0a-4a84-8f8b-24ae3b701e3d" 00:29:04.337 ], 00:29:04.337 "product_name": "Malloc disk", 00:29:04.337 "block_size": 512, 00:29:04.337 "num_blocks": 65536, 00:29:04.337 "uuid": "114c554e-ba0a-4a84-8f8b-24ae3b701e3d", 00:29:04.337 "assigned_rate_limits": { 00:29:04.337 "rw_ios_per_sec": 0, 00:29:04.337 "rw_mbytes_per_sec": 0, 00:29:04.337 "r_mbytes_per_sec": 0, 00:29:04.337 "w_mbytes_per_sec": 0 00:29:04.337 }, 00:29:04.337 "claimed": true, 00:29:04.337 "claim_type": "exclusive_write", 00:29:04.337 "zoned": false, 00:29:04.337 "supported_io_types": { 00:29:04.337 "read": true, 00:29:04.337 "write": true, 00:29:04.337 "unmap": true, 00:29:04.337 "write_zeroes": true, 00:29:04.337 "flush": true, 00:29:04.337 "reset": true, 00:29:04.337 "compare": false, 00:29:04.337 "compare_and_write": false, 00:29:04.337 "abort": true, 
00:29:04.337 "nvme_admin": false, 00:29:04.337 "nvme_io": false 00:29:04.337 }, 00:29:04.337 "memory_domains": [ 00:29:04.337 { 00:29:04.337 "dma_device_id": "system", 00:29:04.337 "dma_device_type": 1 00:29:04.337 }, 00:29:04.337 { 00:29:04.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.337 "dma_device_type": 2 00:29:04.337 } 00:29:04.337 ], 00:29:04.337 "driver_specific": {} 00:29:04.337 } 00:29:04.337 ] 00:29:04.337 01:59:04 -- common/autotest_common.sh@893 -- # return 0 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:04.337 01:59:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.594 01:59:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:04.594 "name": "Existed_Raid", 00:29:04.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.594 "strip_size_kb": 64, 00:29:04.594 "state": "configuring", 00:29:04.594 "raid_level": "concat", 00:29:04.594 "superblock": false, 00:29:04.594 "num_base_bdevs": 4, 00:29:04.594 "num_base_bdevs_discovered": 2, 00:29:04.594 "num_base_bdevs_operational": 4, 00:29:04.594 "base_bdevs_list": [ 00:29:04.594 { 00:29:04.594 "name": "BaseBdev1", 00:29:04.594 "uuid": "25490134-88f4-4172-9943-e3607073dff0", 00:29:04.594 "is_configured": true, 00:29:04.594 "data_offset": 0, 00:29:04.594 "data_size": 65536 00:29:04.594 }, 00:29:04.594 { 00:29:04.594 "name": "BaseBdev2", 00:29:04.594 "uuid": "114c554e-ba0a-4a84-8f8b-24ae3b701e3d", 00:29:04.594 "is_configured": true, 00:29:04.594 "data_offset": 0, 00:29:04.594 "data_size": 65536 00:29:04.594 }, 00:29:04.594 { 00:29:04.594 "name": "BaseBdev3", 00:29:04.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.594 "is_configured": false, 00:29:04.594 "data_offset": 0, 00:29:04.594 "data_size": 0 00:29:04.594 }, 00:29:04.594 { 00:29:04.594 "name": "BaseBdev4", 00:29:04.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.594 "is_configured": false, 00:29:04.594 "data_offset": 0, 00:29:04.594 "data_size": 0 00:29:04.594 } 00:29:04.594 ] 00:29:04.594 }' 00:29:04.594 01:59:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:04.594 01:59:04 -- common/autotest_common.sh@10 -- # set +x 00:29:05.159 01:59:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:29:05.462 [2024-04-24 01:59:05.452324] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:05.462 BaseBdev3 00:29:05.462 01:59:05 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:29:05.462 01:59:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:29:05.462 01:59:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:05.462 01:59:05 -- common/autotest_common.sh@887 -- # local i 00:29:05.462 01:59:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:05.462 01:59:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:05.462 01:59:05 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:05.720 01:59:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:05.977 [ 00:29:05.977 { 00:29:05.977 "name": "BaseBdev3", 00:29:05.977 "aliases": [ 00:29:05.977 "33341d57-04f6-4ff6-9f0e-aba22c795eb3" 00:29:05.977 ], 00:29:05.977 "product_name": "Malloc disk", 00:29:05.977 "block_size": 512, 00:29:05.977 "num_blocks": 65536, 00:29:05.978 "uuid": "33341d57-04f6-4ff6-9f0e-aba22c795eb3", 00:29:05.978 "assigned_rate_limits": { 00:29:05.978 "rw_ios_per_sec": 0, 00:29:05.978 "rw_mbytes_per_sec": 0, 00:29:05.978 "r_mbytes_per_sec": 0, 00:29:05.978 "w_mbytes_per_sec": 0 00:29:05.978 }, 00:29:05.978 "claimed": true, 00:29:05.978 "claim_type": "exclusive_write", 00:29:05.978 "zoned": false, 00:29:05.978 "supported_io_types": { 00:29:05.978 "read": true, 00:29:05.978 "write": true, 00:29:05.978 "unmap": true, 00:29:05.978 "write_zeroes": true, 00:29:05.978 "flush": true, 00:29:05.978 "reset": true, 00:29:05.978 "compare": false, 00:29:05.978 "compare_and_write": false, 00:29:05.978 "abort": true, 00:29:05.978 "nvme_admin": false, 00:29:05.978 "nvme_io": false 00:29:05.978 }, 00:29:05.978 "memory_domains": [ 00:29:05.978 { 00:29:05.978 "dma_device_id": "system", 00:29:05.978 "dma_device_type": 1 00:29:05.978 }, 00:29:05.978 { 00:29:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:05.978 "dma_device_type": 2 00:29:05.978 } 00:29:05.978 ], 00:29:05.978 "driver_specific": {} 00:29:05.978 } 00:29:05.978 ] 00:29:05.978 01:59:05 -- common/autotest_common.sh@893 -- # return 0 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.978 01:59:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:06.236 01:59:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:06.236 "name": "Existed_Raid", 00:29:06.236 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:06.236 "strip_size_kb": 64, 00:29:06.236 "state": "configuring", 00:29:06.236 "raid_level": "concat", 00:29:06.236 "superblock": false, 00:29:06.236 "num_base_bdevs": 4, 00:29:06.236 "num_base_bdevs_discovered": 3, 00:29:06.236 "num_base_bdevs_operational": 4, 00:29:06.236 "base_bdevs_list": [ 00:29:06.236 { 00:29:06.236 "name": "BaseBdev1", 00:29:06.236 "uuid": "25490134-88f4-4172-9943-e3607073dff0", 00:29:06.236 "is_configured": true, 00:29:06.237 "data_offset": 0, 00:29:06.237 "data_size": 65536 00:29:06.237 }, 00:29:06.237 { 00:29:06.237 "name": "BaseBdev2", 00:29:06.237 "uuid": "114c554e-ba0a-4a84-8f8b-24ae3b701e3d", 00:29:06.237 "is_configured": true, 00:29:06.237 "data_offset": 0, 00:29:06.237 "data_size": 65536 00:29:06.237 }, 00:29:06.237 { 00:29:06.237 "name": "BaseBdev3", 00:29:06.237 "uuid": "33341d57-04f6-4ff6-9f0e-aba22c795eb3", 00:29:06.237 "is_configured": true, 00:29:06.237 "data_offset": 0, 00:29:06.237 "data_size": 65536 00:29:06.237 }, 00:29:06.237 { 00:29:06.237 "name": "BaseBdev4", 00:29:06.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.237 "is_configured": false, 00:29:06.237 "data_offset": 0, 00:29:06.237 "data_size": 0 00:29:06.237 } 00:29:06.237 ] 00:29:06.237 }' 00:29:06.237 01:59:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:06.237 01:59:06 -- common/autotest_common.sh@10 -- # set +x 00:29:06.802 01:59:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:29:07.366 [2024-04-24 01:59:07.151281] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:07.366 [2024-04-24 01:59:07.151335] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:29:07.366 [2024-04-24 01:59:07.151344] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:29:07.366 [2024-04-24 01:59:07.151490] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:29:07.366 [2024-04-24 01:59:07.151822] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:29:07.366 [2024-04-24 01:59:07.151839] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:29:07.366 [2024-04-24 01:59:07.152056] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.366 BaseBdev4 00:29:07.366 01:59:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:29:07.366 01:59:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:29:07.366 01:59:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:07.366 01:59:07 -- common/autotest_common.sh@887 -- # local i 00:29:07.366 01:59:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:07.366 01:59:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:07.366 01:59:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:07.366 01:59:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:07.624 [ 00:29:07.624 { 00:29:07.624 "name": "BaseBdev4", 00:29:07.624 "aliases": [ 00:29:07.624 "a53725d8-ce5d-4c0c-b6b9-29a4e9029dfd" 00:29:07.624 ], 00:29:07.624 "product_name": "Malloc disk", 00:29:07.624 "block_size": 512, 00:29:07.624 "num_blocks": 65536, 00:29:07.624 "uuid": 
"a53725d8-ce5d-4c0c-b6b9-29a4e9029dfd", 00:29:07.624 "assigned_rate_limits": { 00:29:07.624 "rw_ios_per_sec": 0, 00:29:07.624 "rw_mbytes_per_sec": 0, 00:29:07.624 "r_mbytes_per_sec": 0, 00:29:07.624 "w_mbytes_per_sec": 0 00:29:07.624 }, 00:29:07.624 "claimed": true, 00:29:07.624 "claim_type": "exclusive_write", 00:29:07.624 "zoned": false, 00:29:07.624 "supported_io_types": { 00:29:07.624 "read": true, 00:29:07.624 "write": true, 00:29:07.624 "unmap": true, 00:29:07.624 "write_zeroes": true, 00:29:07.624 "flush": true, 00:29:07.624 "reset": true, 00:29:07.624 "compare": false, 00:29:07.624 "compare_and_write": false, 00:29:07.624 "abort": true, 00:29:07.624 "nvme_admin": false, 00:29:07.624 "nvme_io": false 00:29:07.624 }, 00:29:07.624 "memory_domains": [ 00:29:07.624 { 00:29:07.624 "dma_device_id": "system", 00:29:07.624 "dma_device_type": 1 00:29:07.624 }, 00:29:07.624 { 00:29:07.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.624 "dma_device_type": 2 00:29:07.624 } 00:29:07.624 ], 00:29:07.624 "driver_specific": {} 00:29:07.624 } 00:29:07.624 ] 00:29:07.624 01:59:07 -- common/autotest_common.sh@893 -- # return 0 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.624 01:59:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:07.882 01:59:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:07.883 "name": "Existed_Raid", 00:29:07.883 "uuid": "8f9d6554-ef1e-4afc-ba68-82b78dc22039", 00:29:07.883 "strip_size_kb": 64, 00:29:07.883 "state": "online", 00:29:07.883 "raid_level": "concat", 00:29:07.883 "superblock": false, 00:29:07.883 "num_base_bdevs": 4, 00:29:07.883 "num_base_bdevs_discovered": 4, 00:29:07.883 "num_base_bdevs_operational": 4, 00:29:07.883 "base_bdevs_list": [ 00:29:07.883 { 00:29:07.883 "name": "BaseBdev1", 00:29:07.883 "uuid": "25490134-88f4-4172-9943-e3607073dff0", 00:29:07.883 "is_configured": true, 00:29:07.883 "data_offset": 0, 00:29:07.883 "data_size": 65536 00:29:07.883 }, 00:29:07.883 { 00:29:07.883 "name": "BaseBdev2", 00:29:07.883 "uuid": "114c554e-ba0a-4a84-8f8b-24ae3b701e3d", 00:29:07.883 "is_configured": true, 00:29:07.883 "data_offset": 0, 00:29:07.883 "data_size": 65536 00:29:07.883 }, 00:29:07.883 { 00:29:07.883 "name": "BaseBdev3", 00:29:07.883 "uuid": "33341d57-04f6-4ff6-9f0e-aba22c795eb3", 00:29:07.883 "is_configured": true, 00:29:07.883 "data_offset": 0, 00:29:07.883 "data_size": 65536 00:29:07.883 }, 00:29:07.883 { 00:29:07.883 "name": "BaseBdev4", 00:29:07.883 "uuid": 
"a53725d8-ce5d-4c0c-b6b9-29a4e9029dfd", 00:29:07.883 "is_configured": true, 00:29:07.883 "data_offset": 0, 00:29:07.883 "data_size": 65536 00:29:07.883 } 00:29:07.883 ] 00:29:07.883 }' 00:29:07.883 01:59:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:07.883 01:59:07 -- common/autotest_common.sh@10 -- # set +x 00:29:08.448 01:59:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:08.706 [2024-04-24 01:59:08.712734] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:08.706 [2024-04-24 01:59:08.712778] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:08.706 [2024-04-24 01:59:08.712829] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.963 01:59:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:09.220 01:59:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:09.220 "name": "Existed_Raid", 00:29:09.220 "uuid": "8f9d6554-ef1e-4afc-ba68-82b78dc22039", 00:29:09.220 "strip_size_kb": 64, 00:29:09.220 "state": "offline", 00:29:09.220 "raid_level": "concat", 00:29:09.220 "superblock": false, 00:29:09.220 "num_base_bdevs": 4, 00:29:09.220 "num_base_bdevs_discovered": 3, 00:29:09.220 "num_base_bdevs_operational": 3, 00:29:09.220 "base_bdevs_list": [ 00:29:09.220 { 00:29:09.220 "name": null, 00:29:09.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.220 "is_configured": false, 00:29:09.220 "data_offset": 0, 00:29:09.220 "data_size": 65536 00:29:09.220 }, 00:29:09.220 { 00:29:09.220 "name": "BaseBdev2", 00:29:09.220 "uuid": "114c554e-ba0a-4a84-8f8b-24ae3b701e3d", 00:29:09.220 "is_configured": true, 00:29:09.220 "data_offset": 0, 00:29:09.220 "data_size": 65536 00:29:09.220 }, 00:29:09.220 { 00:29:09.220 "name": "BaseBdev3", 00:29:09.220 "uuid": "33341d57-04f6-4ff6-9f0e-aba22c795eb3", 00:29:09.220 "is_configured": true, 00:29:09.220 "data_offset": 0, 00:29:09.220 "data_size": 65536 00:29:09.220 }, 00:29:09.220 { 00:29:09.220 "name": "BaseBdev4", 00:29:09.220 "uuid": "a53725d8-ce5d-4c0c-b6b9-29a4e9029dfd", 00:29:09.220 "is_configured": true, 00:29:09.220 "data_offset": 0, 00:29:09.220 "data_size": 
65536 00:29:09.220 } 00:29:09.220 ] 00:29:09.220 }' 00:29:09.220 01:59:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:09.220 01:59:09 -- common/autotest_common.sh@10 -- # set +x 00:29:09.786 01:59:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:29:09.786 01:59:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:09.786 01:59:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:09.786 01:59:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.043 01:59:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:10.043 01:59:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:10.043 01:59:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:10.300 [2024-04-24 01:59:10.240652] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:10.300 01:59:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:10.300 01:59:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:10.300 01:59:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.300 01:59:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:10.559 01:59:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:10.559 01:59:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:10.559 01:59:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:29:10.817 [2024-04-24 01:59:10.812678] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:11.075 01:59:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:11.075 01:59:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:11.075 01:59:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:11.075 01:59:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.333 01:59:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:11.333 01:59:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:11.333 01:59:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:29:11.660 [2024-04-24 01:59:11.463605] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:29:11.660 [2024-04-24 01:59:11.463670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:29:11.660 01:59:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:11.660 01:59:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:11.660 01:59:11 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:29:11.660 01:59:11 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.937 01:59:11 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:29:11.937 01:59:11 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:29:11.937 01:59:11 -- bdev/bdev_raid.sh@287 -- # killprocess 128206 00:29:11.937 01:59:11 -- common/autotest_common.sh@936 -- # '[' -z 128206 ']' 00:29:11.937 01:59:11 -- common/autotest_common.sh@940 -- # kill -0 128206 00:29:11.937 01:59:11 -- common/autotest_common.sh@941 -- # uname 00:29:11.937 01:59:11 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:29:11.937 01:59:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128206 00:29:11.937 01:59:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:11.937 01:59:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:11.937 01:59:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128206' 00:29:11.937 killing process with pid 128206 00:29:11.937 01:59:11 -- common/autotest_common.sh@955 -- # kill 128206 00:29:11.937 [2024-04-24 01:59:11.913966] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:11.937 [2024-04-24 01:59:11.914133] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:11.937 01:59:11 -- common/autotest_common.sh@960 -- # wait 128206 00:29:13.317 01:59:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:29:13.317 00:29:13.317 real 0m15.765s 00:29:13.317 user 0m27.255s 00:29:13.317 sys 0m2.092s 00:29:13.317 01:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:13.317 ************************************ 00:29:13.317 END TEST raid_state_function_test 00:29:13.317 01:59:13 -- common/autotest_common.sh@10 -- # set +x 00:29:13.317 ************************************ 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:29:13.576 01:59:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:13.576 01:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.576 01:59:13 -- common/autotest_common.sh@10 -- # set +x 00:29:13.576 ************************************ 00:29:13.576 START TEST raid_state_function_test_sb 00:29:13.576 ************************************ 00:29:13.576 01:59:13 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 true 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:13.576 01:59:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:13.577 Process raid pid: 128671 00:29:13.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
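From here the same state-function flow is repeated with superblocks enabled (raid_state_function_test concat 4 true, new bdev_svc pid 128671). On the RPC side the only difference is the extra -s flag when the array is created; a minimal sketch, with the socket and script path taken from this run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -z 64 -s -r concat \
       -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

With -s each base bdev carries an on-disk superblock, which is why the configured base bdevs below report data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen in the non-superblock run above.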
00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=128671 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128671' 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128671 /var/tmp/spdk-raid.sock 00:29:13.577 01:59:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:13.577 01:59:13 -- common/autotest_common.sh@817 -- # '[' -z 128671 ']' 00:29:13.577 01:59:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:13.577 01:59:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:13.577 01:59:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:13.577 01:59:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:13.577 01:59:13 -- common/autotest_common.sh@10 -- # set +x 00:29:13.577 [2024-04-24 01:59:13.549573] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:29:13.577 [2024-04-24 01:59:13.550064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.835 [2024-04-24 01:59:13.737242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.118 [2024-04-24 01:59:14.026082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.379 [2024-04-24 01:59:14.301647] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:14.640 01:59:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:14.640 01:59:14 -- common/autotest_common.sh@850 -- # return 0 00:29:14.641 01:59:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:14.899 [2024-04-24 01:59:14.808076] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:14.899 [2024-04-24 01:59:14.808404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:14.899 [2024-04-24 01:59:14.808497] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:14.899 [2024-04-24 01:59:14.808559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:14.899 [2024-04-24 01:59:14.808642] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:14.899 [2024-04-24 01:59:14.808712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:14.899 [2024-04-24 01:59:14.808742] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:14.899 [2024-04-24 01:59:14.808878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.899 01:59:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:15.158 01:59:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:15.158 "name": "Existed_Raid", 00:29:15.158 "uuid": "7bcc34c5-47b4-4644-87ce-fa99919eecad", 00:29:15.158 "strip_size_kb": 64, 00:29:15.158 "state": "configuring", 00:29:15.158 "raid_level": "concat", 00:29:15.158 "superblock": true, 00:29:15.158 "num_base_bdevs": 4, 00:29:15.158 "num_base_bdevs_discovered": 0, 00:29:15.158 "num_base_bdevs_operational": 4, 00:29:15.158 "base_bdevs_list": [ 00:29:15.158 { 
00:29:15.158 "name": "BaseBdev1", 00:29:15.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.158 "is_configured": false, 00:29:15.158 "data_offset": 0, 00:29:15.158 "data_size": 0 00:29:15.158 }, 00:29:15.158 { 00:29:15.158 "name": "BaseBdev2", 00:29:15.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.158 "is_configured": false, 00:29:15.158 "data_offset": 0, 00:29:15.158 "data_size": 0 00:29:15.158 }, 00:29:15.158 { 00:29:15.158 "name": "BaseBdev3", 00:29:15.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.158 "is_configured": false, 00:29:15.158 "data_offset": 0, 00:29:15.158 "data_size": 0 00:29:15.158 }, 00:29:15.158 { 00:29:15.158 "name": "BaseBdev4", 00:29:15.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.158 "is_configured": false, 00:29:15.158 "data_offset": 0, 00:29:15.158 "data_size": 0 00:29:15.158 } 00:29:15.158 ] 00:29:15.158 }' 00:29:15.158 01:59:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:15.158 01:59:15 -- common/autotest_common.sh@10 -- # set +x 00:29:15.725 01:59:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:15.984 [2024-04-24 01:59:15.952147] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:15.984 [2024-04-24 01:59:15.952436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:29:15.984 01:59:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:16.242 [2024-04-24 01:59:16.232276] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:16.242 [2024-04-24 01:59:16.232619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:16.242 [2024-04-24 01:59:16.232713] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:16.242 [2024-04-24 01:59:16.232827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:16.242 [2024-04-24 01:59:16.232908] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:16.242 [2024-04-24 01:59:16.233026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:16.242 [2024-04-24 01:59:16.233101] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:16.242 [2024-04-24 01:59:16.233206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:16.242 01:59:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:16.505 [2024-04-24 01:59:16.552003] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:16.505 BaseBdev1 00:29:16.505 01:59:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:29:16.505 01:59:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:29:16.505 01:59:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:16.505 01:59:16 -- common/autotest_common.sh@887 -- # local i 00:29:16.505 01:59:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:16.505 01:59:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:16.505 01:59:16 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:17.072 01:59:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:17.072 [ 00:29:17.072 { 00:29:17.072 "name": "BaseBdev1", 00:29:17.072 "aliases": [ 00:29:17.072 "6265d1ed-aa5a-4610-8c40-08f48bccc002" 00:29:17.072 ], 00:29:17.072 "product_name": "Malloc disk", 00:29:17.072 "block_size": 512, 00:29:17.072 "num_blocks": 65536, 00:29:17.072 "uuid": "6265d1ed-aa5a-4610-8c40-08f48bccc002", 00:29:17.072 "assigned_rate_limits": { 00:29:17.072 "rw_ios_per_sec": 0, 00:29:17.072 "rw_mbytes_per_sec": 0, 00:29:17.072 "r_mbytes_per_sec": 0, 00:29:17.072 "w_mbytes_per_sec": 0 00:29:17.072 }, 00:29:17.072 "claimed": true, 00:29:17.072 "claim_type": "exclusive_write", 00:29:17.072 "zoned": false, 00:29:17.072 "supported_io_types": { 00:29:17.072 "read": true, 00:29:17.072 "write": true, 00:29:17.072 "unmap": true, 00:29:17.072 "write_zeroes": true, 00:29:17.072 "flush": true, 00:29:17.072 "reset": true, 00:29:17.072 "compare": false, 00:29:17.072 "compare_and_write": false, 00:29:17.072 "abort": true, 00:29:17.072 "nvme_admin": false, 00:29:17.072 "nvme_io": false 00:29:17.072 }, 00:29:17.072 "memory_domains": [ 00:29:17.072 { 00:29:17.072 "dma_device_id": "system", 00:29:17.072 "dma_device_type": 1 00:29:17.072 }, 00:29:17.072 { 00:29:17.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:17.072 "dma_device_type": 2 00:29:17.072 } 00:29:17.072 ], 00:29:17.072 "driver_specific": {} 00:29:17.072 } 00:29:17.072 ] 00:29:17.072 01:59:17 -- common/autotest_common.sh@893 -- # return 0 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:17.072 01:59:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:17.073 01:59:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:17.073 01:59:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:17.073 01:59:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:17.073 01:59:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.073 01:59:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:17.641 01:59:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:17.641 "name": "Existed_Raid", 00:29:17.641 "uuid": "08a7b96c-f1cb-438f-913e-d91bb1893ed0", 00:29:17.641 "strip_size_kb": 64, 00:29:17.641 "state": "configuring", 00:29:17.641 "raid_level": "concat", 00:29:17.641 "superblock": true, 00:29:17.641 "num_base_bdevs": 4, 00:29:17.641 "num_base_bdevs_discovered": 1, 00:29:17.641 "num_base_bdevs_operational": 4, 00:29:17.641 "base_bdevs_list": [ 00:29:17.641 { 00:29:17.641 "name": "BaseBdev1", 00:29:17.641 "uuid": "6265d1ed-aa5a-4610-8c40-08f48bccc002", 00:29:17.641 "is_configured": true, 00:29:17.641 "data_offset": 2048, 00:29:17.641 "data_size": 63488 00:29:17.641 }, 00:29:17.641 { 00:29:17.641 "name": "BaseBdev2", 00:29:17.641 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:17.641 "is_configured": false, 00:29:17.641 "data_offset": 0, 00:29:17.641 "data_size": 0 00:29:17.641 }, 00:29:17.641 { 00:29:17.641 "name": "BaseBdev3", 00:29:17.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.641 "is_configured": false, 00:29:17.641 "data_offset": 0, 00:29:17.641 "data_size": 0 00:29:17.641 }, 00:29:17.641 { 00:29:17.641 "name": "BaseBdev4", 00:29:17.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.641 "is_configured": false, 00:29:17.641 "data_offset": 0, 00:29:17.641 "data_size": 0 00:29:17.641 } 00:29:17.641 ] 00:29:17.641 }' 00:29:17.641 01:59:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:17.641 01:59:17 -- common/autotest_common.sh@10 -- # set +x 00:29:18.208 01:59:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:18.466 [2024-04-24 01:59:18.328483] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:18.466 [2024-04-24 01:59:18.328741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:29:18.466 01:59:18 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:29:18.466 01:59:18 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:18.724 01:59:18 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:18.982 BaseBdev1 00:29:18.982 01:59:18 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:29:18.982 01:59:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:29:18.982 01:59:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:18.982 01:59:18 -- common/autotest_common.sh@887 -- # local i 00:29:18.982 01:59:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:18.982 01:59:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:18.982 01:59:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:19.240 01:59:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:19.807 [ 00:29:19.807 { 00:29:19.807 "name": "BaseBdev1", 00:29:19.807 "aliases": [ 00:29:19.807 "0ddd574b-b2f0-4f6a-a2de-0e27413568e8" 00:29:19.807 ], 00:29:19.807 "product_name": "Malloc disk", 00:29:19.807 "block_size": 512, 00:29:19.807 "num_blocks": 65536, 00:29:19.807 "uuid": "0ddd574b-b2f0-4f6a-a2de-0e27413568e8", 00:29:19.807 "assigned_rate_limits": { 00:29:19.807 "rw_ios_per_sec": 0, 00:29:19.807 "rw_mbytes_per_sec": 0, 00:29:19.807 "r_mbytes_per_sec": 0, 00:29:19.807 "w_mbytes_per_sec": 0 00:29:19.807 }, 00:29:19.807 "claimed": false, 00:29:19.807 "zoned": false, 00:29:19.807 "supported_io_types": { 00:29:19.807 "read": true, 00:29:19.807 "write": true, 00:29:19.807 "unmap": true, 00:29:19.807 "write_zeroes": true, 00:29:19.807 "flush": true, 00:29:19.807 "reset": true, 00:29:19.807 "compare": false, 00:29:19.807 "compare_and_write": false, 00:29:19.807 "abort": true, 00:29:19.807 "nvme_admin": false, 00:29:19.807 "nvme_io": false 00:29:19.807 }, 00:29:19.807 "memory_domains": [ 00:29:19.807 { 00:29:19.807 "dma_device_id": "system", 00:29:19.807 "dma_device_type": 1 00:29:19.807 }, 00:29:19.807 { 00:29:19.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.807 "dma_device_type": 2 
00:29:19.807 } 00:29:19.807 ], 00:29:19.807 "driver_specific": {} 00:29:19.807 } 00:29:19.807 ] 00:29:19.807 01:59:19 -- common/autotest_common.sh@893 -- # return 0 00:29:19.807 01:59:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:19.807 [2024-04-24 01:59:19.879761] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:19.807 [2024-04-24 01:59:19.882060] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:19.807 [2024-04-24 01:59:19.882273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:19.807 [2024-04-24 01:59:19.882369] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:19.807 [2024-04-24 01:59:19.882431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:19.807 [2024-04-24 01:59:19.882508] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:19.807 [2024-04-24 01:59:19.882563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:20.065 01:59:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.322 01:59:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:20.322 "name": "Existed_Raid", 00:29:20.322 "uuid": "5d51a095-2af3-41bc-9d08-38c256779496", 00:29:20.322 "strip_size_kb": 64, 00:29:20.322 "state": "configuring", 00:29:20.322 "raid_level": "concat", 00:29:20.322 "superblock": true, 00:29:20.322 "num_base_bdevs": 4, 00:29:20.322 "num_base_bdevs_discovered": 1, 00:29:20.322 "num_base_bdevs_operational": 4, 00:29:20.322 "base_bdevs_list": [ 00:29:20.322 { 00:29:20.322 "name": "BaseBdev1", 00:29:20.322 "uuid": "0ddd574b-b2f0-4f6a-a2de-0e27413568e8", 00:29:20.322 "is_configured": true, 00:29:20.322 "data_offset": 2048, 00:29:20.322 "data_size": 63488 00:29:20.322 }, 00:29:20.322 { 00:29:20.322 "name": "BaseBdev2", 00:29:20.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.322 "is_configured": false, 00:29:20.322 "data_offset": 0, 00:29:20.322 "data_size": 0 00:29:20.322 }, 00:29:20.322 { 00:29:20.322 "name": "BaseBdev3", 00:29:20.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.322 "is_configured": 
false, 00:29:20.323 "data_offset": 0, 00:29:20.323 "data_size": 0 00:29:20.323 }, 00:29:20.323 { 00:29:20.323 "name": "BaseBdev4", 00:29:20.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.323 "is_configured": false, 00:29:20.323 "data_offset": 0, 00:29:20.323 "data_size": 0 00:29:20.323 } 00:29:20.323 ] 00:29:20.323 }' 00:29:20.323 01:59:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:20.323 01:59:20 -- common/autotest_common.sh@10 -- # set +x 00:29:20.890 01:59:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:29:21.148 [2024-04-24 01:59:21.222791] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:21.148 BaseBdev2 00:29:21.406 01:59:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:29:21.406 01:59:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:29:21.406 01:59:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:21.407 01:59:21 -- common/autotest_common.sh@887 -- # local i 00:29:21.407 01:59:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:21.407 01:59:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:21.407 01:59:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:21.665 01:59:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:21.665 [ 00:29:21.665 { 00:29:21.665 "name": "BaseBdev2", 00:29:21.665 "aliases": [ 00:29:21.665 "9897d9e1-1324-493f-a916-028c605cf186" 00:29:21.665 ], 00:29:21.665 "product_name": "Malloc disk", 00:29:21.665 "block_size": 512, 00:29:21.665 "num_blocks": 65536, 00:29:21.665 "uuid": "9897d9e1-1324-493f-a916-028c605cf186", 00:29:21.665 "assigned_rate_limits": { 00:29:21.665 "rw_ios_per_sec": 0, 00:29:21.665 "rw_mbytes_per_sec": 0, 00:29:21.665 "r_mbytes_per_sec": 0, 00:29:21.665 "w_mbytes_per_sec": 0 00:29:21.665 }, 00:29:21.665 "claimed": true, 00:29:21.665 "claim_type": "exclusive_write", 00:29:21.665 "zoned": false, 00:29:21.665 "supported_io_types": { 00:29:21.665 "read": true, 00:29:21.665 "write": true, 00:29:21.665 "unmap": true, 00:29:21.665 "write_zeroes": true, 00:29:21.665 "flush": true, 00:29:21.665 "reset": true, 00:29:21.665 "compare": false, 00:29:21.665 "compare_and_write": false, 00:29:21.665 "abort": true, 00:29:21.665 "nvme_admin": false, 00:29:21.665 "nvme_io": false 00:29:21.665 }, 00:29:21.665 "memory_domains": [ 00:29:21.665 { 00:29:21.665 "dma_device_id": "system", 00:29:21.665 "dma_device_type": 1 00:29:21.665 }, 00:29:21.665 { 00:29:21.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:21.665 "dma_device_type": 2 00:29:21.665 } 00:29:21.665 ], 00:29:21.665 "driver_specific": {} 00:29:21.665 } 00:29:21.665 ] 00:29:21.665 01:59:21 -- common/autotest_common.sh@893 -- # return 0 00:29:21.665 01:59:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.924 01:59:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.182 01:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:22.182 "name": "Existed_Raid", 00:29:22.182 "uuid": "5d51a095-2af3-41bc-9d08-38c256779496", 00:29:22.182 "strip_size_kb": 64, 00:29:22.182 "state": "configuring", 00:29:22.182 "raid_level": "concat", 00:29:22.182 "superblock": true, 00:29:22.182 "num_base_bdevs": 4, 00:29:22.182 "num_base_bdevs_discovered": 2, 00:29:22.182 "num_base_bdevs_operational": 4, 00:29:22.182 "base_bdevs_list": [ 00:29:22.182 { 00:29:22.182 "name": "BaseBdev1", 00:29:22.182 "uuid": "0ddd574b-b2f0-4f6a-a2de-0e27413568e8", 00:29:22.182 "is_configured": true, 00:29:22.182 "data_offset": 2048, 00:29:22.182 "data_size": 63488 00:29:22.182 }, 00:29:22.182 { 00:29:22.182 "name": "BaseBdev2", 00:29:22.182 "uuid": "9897d9e1-1324-493f-a916-028c605cf186", 00:29:22.182 "is_configured": true, 00:29:22.182 "data_offset": 2048, 00:29:22.182 "data_size": 63488 00:29:22.182 }, 00:29:22.182 { 00:29:22.182 "name": "BaseBdev3", 00:29:22.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.182 "is_configured": false, 00:29:22.182 "data_offset": 0, 00:29:22.183 "data_size": 0 00:29:22.183 }, 00:29:22.183 { 00:29:22.183 "name": "BaseBdev4", 00:29:22.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.183 "is_configured": false, 00:29:22.183 "data_offset": 0, 00:29:22.183 "data_size": 0 00:29:22.183 } 00:29:22.183 ] 00:29:22.183 }' 00:29:22.183 01:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:22.183 01:59:22 -- common/autotest_common.sh@10 -- # set +x 00:29:22.750 01:59:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:29:23.007 [2024-04-24 01:59:22.979084] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:23.007 BaseBdev3 00:29:23.007 01:59:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:29:23.007 01:59:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:29:23.007 01:59:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:23.007 01:59:22 -- common/autotest_common.sh@887 -- # local i 00:29:23.007 01:59:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:23.007 01:59:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:23.007 01:59:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:23.263 01:59:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:23.520 [ 00:29:23.520 { 00:29:23.520 "name": "BaseBdev3", 00:29:23.520 "aliases": [ 00:29:23.520 "83270d13-c59b-445d-8151-7dab5be32285" 00:29:23.520 ], 00:29:23.520 "product_name": "Malloc disk", 00:29:23.520 "block_size": 512, 00:29:23.520 "num_blocks": 65536, 00:29:23.520 "uuid": 
"83270d13-c59b-445d-8151-7dab5be32285", 00:29:23.520 "assigned_rate_limits": { 00:29:23.520 "rw_ios_per_sec": 0, 00:29:23.520 "rw_mbytes_per_sec": 0, 00:29:23.520 "r_mbytes_per_sec": 0, 00:29:23.520 "w_mbytes_per_sec": 0 00:29:23.520 }, 00:29:23.520 "claimed": true, 00:29:23.520 "claim_type": "exclusive_write", 00:29:23.520 "zoned": false, 00:29:23.520 "supported_io_types": { 00:29:23.520 "read": true, 00:29:23.520 "write": true, 00:29:23.520 "unmap": true, 00:29:23.520 "write_zeroes": true, 00:29:23.520 "flush": true, 00:29:23.520 "reset": true, 00:29:23.520 "compare": false, 00:29:23.520 "compare_and_write": false, 00:29:23.520 "abort": true, 00:29:23.520 "nvme_admin": false, 00:29:23.520 "nvme_io": false 00:29:23.520 }, 00:29:23.520 "memory_domains": [ 00:29:23.520 { 00:29:23.520 "dma_device_id": "system", 00:29:23.520 "dma_device_type": 1 00:29:23.520 }, 00:29:23.520 { 00:29:23.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.520 "dma_device_type": 2 00:29:23.520 } 00:29:23.520 ], 00:29:23.520 "driver_specific": {} 00:29:23.520 } 00:29:23.520 ] 00:29:23.520 01:59:23 -- common/autotest_common.sh@893 -- # return 0 00:29:23.520 01:59:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:23.520 01:59:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:23.520 01:59:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:23.520 01:59:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.521 01:59:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.778 01:59:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:23.778 "name": "Existed_Raid", 00:29:23.778 "uuid": "5d51a095-2af3-41bc-9d08-38c256779496", 00:29:23.778 "strip_size_kb": 64, 00:29:23.778 "state": "configuring", 00:29:23.778 "raid_level": "concat", 00:29:23.778 "superblock": true, 00:29:23.778 "num_base_bdevs": 4, 00:29:23.778 "num_base_bdevs_discovered": 3, 00:29:23.778 "num_base_bdevs_operational": 4, 00:29:23.778 "base_bdevs_list": [ 00:29:23.778 { 00:29:23.778 "name": "BaseBdev1", 00:29:23.778 "uuid": "0ddd574b-b2f0-4f6a-a2de-0e27413568e8", 00:29:23.778 "is_configured": true, 00:29:23.778 "data_offset": 2048, 00:29:23.778 "data_size": 63488 00:29:23.778 }, 00:29:23.778 { 00:29:23.778 "name": "BaseBdev2", 00:29:23.778 "uuid": "9897d9e1-1324-493f-a916-028c605cf186", 00:29:23.778 "is_configured": true, 00:29:23.778 "data_offset": 2048, 00:29:23.778 "data_size": 63488 00:29:23.778 }, 00:29:23.778 { 00:29:23.778 "name": "BaseBdev3", 00:29:23.778 "uuid": "83270d13-c59b-445d-8151-7dab5be32285", 00:29:23.778 "is_configured": true, 00:29:23.778 "data_offset": 2048, 00:29:23.778 "data_size": 63488 00:29:23.778 }, 00:29:23.778 { 00:29:23.778 "name": "BaseBdev4", 00:29:23.778 
"uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.778 "is_configured": false, 00:29:23.778 "data_offset": 0, 00:29:23.778 "data_size": 0 00:29:23.778 } 00:29:23.778 ] 00:29:23.778 }' 00:29:23.778 01:59:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:23.778 01:59:23 -- common/autotest_common.sh@10 -- # set +x 00:29:24.712 01:59:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:29:24.712 [2024-04-24 01:59:24.782247] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:24.712 [2024-04-24 01:59:24.782723] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:29:24.712 [2024-04-24 01:59:24.782842] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:24.712 [2024-04-24 01:59:24.783030] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:29:24.712 [2024-04-24 01:59:24.783393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:29:24.712 [2024-04-24 01:59:24.783434] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:29:24.712 BaseBdev4 00:29:24.712 [2024-04-24 01:59:24.783754] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.969 01:59:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:29:24.969 01:59:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:29:24.969 01:59:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:24.969 01:59:24 -- common/autotest_common.sh@887 -- # local i 00:29:24.969 01:59:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:24.969 01:59:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:24.969 01:59:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:24.969 01:59:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:25.226 [ 00:29:25.226 { 00:29:25.226 "name": "BaseBdev4", 00:29:25.226 "aliases": [ 00:29:25.226 "eefe65b2-f9f0-4c87-a0c5-a82ceee6ca5a" 00:29:25.226 ], 00:29:25.226 "product_name": "Malloc disk", 00:29:25.226 "block_size": 512, 00:29:25.226 "num_blocks": 65536, 00:29:25.226 "uuid": "eefe65b2-f9f0-4c87-a0c5-a82ceee6ca5a", 00:29:25.226 "assigned_rate_limits": { 00:29:25.226 "rw_ios_per_sec": 0, 00:29:25.226 "rw_mbytes_per_sec": 0, 00:29:25.226 "r_mbytes_per_sec": 0, 00:29:25.226 "w_mbytes_per_sec": 0 00:29:25.226 }, 00:29:25.227 "claimed": true, 00:29:25.227 "claim_type": "exclusive_write", 00:29:25.227 "zoned": false, 00:29:25.227 "supported_io_types": { 00:29:25.227 "read": true, 00:29:25.227 "write": true, 00:29:25.227 "unmap": true, 00:29:25.227 "write_zeroes": true, 00:29:25.227 "flush": true, 00:29:25.227 "reset": true, 00:29:25.227 "compare": false, 00:29:25.227 "compare_and_write": false, 00:29:25.227 "abort": true, 00:29:25.227 "nvme_admin": false, 00:29:25.227 "nvme_io": false 00:29:25.227 }, 00:29:25.227 "memory_domains": [ 00:29:25.227 { 00:29:25.227 "dma_device_id": "system", 00:29:25.227 "dma_device_type": 1 00:29:25.227 }, 00:29:25.227 { 00:29:25.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:25.227 "dma_device_type": 2 00:29:25.227 } 00:29:25.227 ], 00:29:25.227 "driver_specific": {} 00:29:25.227 } 00:29:25.227 ] 
00:29:25.227 01:59:25 -- common/autotest_common.sh@893 -- # return 0 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.227 01:59:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:25.484 01:59:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:25.484 "name": "Existed_Raid", 00:29:25.484 "uuid": "5d51a095-2af3-41bc-9d08-38c256779496", 00:29:25.484 "strip_size_kb": 64, 00:29:25.484 "state": "online", 00:29:25.484 "raid_level": "concat", 00:29:25.484 "superblock": true, 00:29:25.484 "num_base_bdevs": 4, 00:29:25.484 "num_base_bdevs_discovered": 4, 00:29:25.484 "num_base_bdevs_operational": 4, 00:29:25.484 "base_bdevs_list": [ 00:29:25.484 { 00:29:25.484 "name": "BaseBdev1", 00:29:25.484 "uuid": "0ddd574b-b2f0-4f6a-a2de-0e27413568e8", 00:29:25.484 "is_configured": true, 00:29:25.484 "data_offset": 2048, 00:29:25.484 "data_size": 63488 00:29:25.484 }, 00:29:25.484 { 00:29:25.484 "name": "BaseBdev2", 00:29:25.484 "uuid": "9897d9e1-1324-493f-a916-028c605cf186", 00:29:25.484 "is_configured": true, 00:29:25.484 "data_offset": 2048, 00:29:25.484 "data_size": 63488 00:29:25.484 }, 00:29:25.484 { 00:29:25.484 "name": "BaseBdev3", 00:29:25.484 "uuid": "83270d13-c59b-445d-8151-7dab5be32285", 00:29:25.484 "is_configured": true, 00:29:25.484 "data_offset": 2048, 00:29:25.484 "data_size": 63488 00:29:25.484 }, 00:29:25.484 { 00:29:25.484 "name": "BaseBdev4", 00:29:25.484 "uuid": "eefe65b2-f9f0-4c87-a0c5-a82ceee6ca5a", 00:29:25.484 "is_configured": true, 00:29:25.484 "data_offset": 2048, 00:29:25.484 "data_size": 63488 00:29:25.484 } 00:29:25.484 ] 00:29:25.484 }' 00:29:25.484 01:59:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:25.484 01:59:25 -- common/autotest_common.sh@10 -- # set +x 00:29:26.417 01:59:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:26.417 [2024-04-24 01:59:26.394683] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:26.417 [2024-04-24 01:59:26.394914] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:26.417 [2024-04-24 01:59:26.395114] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@197 
-- # return 1 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.674 01:59:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:26.932 01:59:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:26.932 "name": "Existed_Raid", 00:29:26.932 "uuid": "5d51a095-2af3-41bc-9d08-38c256779496", 00:29:26.932 "strip_size_kb": 64, 00:29:26.932 "state": "offline", 00:29:26.932 "raid_level": "concat", 00:29:26.932 "superblock": true, 00:29:26.932 "num_base_bdevs": 4, 00:29:26.932 "num_base_bdevs_discovered": 3, 00:29:26.932 "num_base_bdevs_operational": 3, 00:29:26.932 "base_bdevs_list": [ 00:29:26.932 { 00:29:26.932 "name": null, 00:29:26.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.932 "is_configured": false, 00:29:26.932 "data_offset": 2048, 00:29:26.932 "data_size": 63488 00:29:26.932 }, 00:29:26.932 { 00:29:26.932 "name": "BaseBdev2", 00:29:26.932 "uuid": "9897d9e1-1324-493f-a916-028c605cf186", 00:29:26.932 "is_configured": true, 00:29:26.932 "data_offset": 2048, 00:29:26.932 "data_size": 63488 00:29:26.932 }, 00:29:26.932 { 00:29:26.932 "name": "BaseBdev3", 00:29:26.932 "uuid": "83270d13-c59b-445d-8151-7dab5be32285", 00:29:26.932 "is_configured": true, 00:29:26.932 "data_offset": 2048, 00:29:26.932 "data_size": 63488 00:29:26.932 }, 00:29:26.932 { 00:29:26.932 "name": "BaseBdev4", 00:29:26.932 "uuid": "eefe65b2-f9f0-4c87-a0c5-a82ceee6ca5a", 00:29:26.932 "is_configured": true, 00:29:26.932 "data_offset": 2048, 00:29:26.932 "data_size": 63488 00:29:26.932 } 00:29:26.932 ] 00:29:26.932 }' 00:29:26.932 01:59:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:26.932 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:29:27.497 01:59:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:29:27.497 01:59:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:27.497 01:59:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.497 01:59:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:27.755 01:59:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:27.755 01:59:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:27.755 01:59:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:28.014 [2024-04-24 01:59:27.973728] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:28.014 01:59:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:28.014 01:59:28 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:28.014 01:59:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.014 01:59:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:28.272 01:59:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:28.272 01:59:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:28.272 01:59:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:29:28.530 [2024-04-24 01:59:28.540643] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:28.787 01:59:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:28.787 01:59:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:28.787 01:59:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:28.787 01:59:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.046 01:59:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:29.046 01:59:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:29.046 01:59:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:29:29.304 [2024-04-24 01:59:29.133504] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:29:29.304 [2024-04-24 01:59:29.133865] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:29:29.304 01:59:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:29.304 01:59:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:29.304 01:59:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:29:29.304 01:59:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.561 01:59:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:29:29.561 01:59:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:29:29.561 01:59:29 -- bdev/bdev_raid.sh@287 -- # killprocess 128671 00:29:29.561 01:59:29 -- common/autotest_common.sh@936 -- # '[' -z 128671 ']' 00:29:29.561 01:59:29 -- common/autotest_common.sh@940 -- # kill -0 128671 00:29:29.561 01:59:29 -- common/autotest_common.sh@941 -- # uname 00:29:29.561 01:59:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:29.561 01:59:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128671 00:29:29.561 01:59:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:29.561 01:59:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:29.561 01:59:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128671' 00:29:29.561 killing process with pid 128671 00:29:29.561 01:59:29 -- common/autotest_common.sh@955 -- # kill 128671 00:29:29.561 01:59:29 -- common/autotest_common.sh@960 -- # wait 128671 00:29:29.561 [2024-04-24 01:59:29.624306] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:29.561 [2024-04-24 01:59:29.624474] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:30.936 01:59:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:29:30.936 00:29:30.936 real 0m17.533s 00:29:30.936 user 0m30.149s 00:29:30.936 sys 0m2.617s 00:29:30.936 01:59:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:30.936 ************************************ 
00:29:30.936 END TEST raid_state_function_test_sb 00:29:30.936 ************************************ 00:29:30.936 01:59:30 -- common/autotest_common.sh@10 -- # set +x 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:29:31.195 01:59:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:29:31.195 01:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:31.195 01:59:31 -- common/autotest_common.sh@10 -- # set +x 00:29:31.195 ************************************ 00:29:31.195 START TEST raid_superblock_test 00:29:31.195 ************************************ 00:29:31.195 01:59:31 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 4 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=129147 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:31.195 01:59:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129147 /var/tmp/spdk-raid.sock 00:29:31.195 01:59:31 -- common/autotest_common.sh@817 -- # '[' -z 129147 ']' 00:29:31.195 01:59:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:31.195 01:59:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:31.195 01:59:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:31.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:31.195 01:59:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:31.195 01:59:31 -- common/autotest_common.sh@10 -- # set +x 00:29:31.195 [2024-04-24 01:59:31.191376] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
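As a condensed sketch of the setup sequence this superblock test drives in the trace below (every command is taken from this run, using the same /var/tmp/spdk-raid.sock socket; the loop form is an editorial shorthand):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b malloc$i    # 32 MiB backing bdev, 512-byte blocks
        $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s    # trailing -s requests an on-disk superblock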
00:29:31.195 [2024-04-24 01:59:31.191838] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129147 ] 00:29:31.454 [2024-04-24 01:59:31.371767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.712 [2024-04-24 01:59:31.659253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.970 [2024-04-24 01:59:31.901524] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:32.244 01:59:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:32.244 01:59:32 -- common/autotest_common.sh@850 -- # return 0 00:29:32.244 01:59:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:29:32.244 01:59:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:29:32.244 01:59:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:29:32.244 01:59:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:29:32.244 01:59:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:32.245 01:59:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:32.245 01:59:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:29:32.245 01:59:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:32.245 01:59:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:29:32.245 malloc1 00:29:32.245 01:59:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:32.503 [2024-04-24 01:59:32.544587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:32.503 [2024-04-24 01:59:32.544883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.503 [2024-04-24 01:59:32.544955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:29:32.503 [2024-04-24 01:59:32.545204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.503 [2024-04-24 01:59:32.547826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.503 [2024-04-24 01:59:32.547990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:32.503 pt1 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:32.503 01:59:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:29:32.762 malloc2 00:29:32.762 01:59:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:29:33.019 [2024-04-24 01:59:33.086726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:33.020 [2024-04-24 01:59:33.087044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.020 [2024-04-24 01:59:33.087126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:33.020 [2024-04-24 01:59:33.087259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.020 [2024-04-24 01:59:33.089592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.020 [2024-04-24 01:59:33.089737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:33.020 pt2 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.278 01:59:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:29:33.535 malloc3 00:29:33.535 01:59:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:33.535 [2024-04-24 01:59:33.609519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:33.535 [2024-04-24 01:59:33.609784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.535 [2024-04-24 01:59:33.609868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:33.535 [2024-04-24 01:59:33.610000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.535 [2024-04-24 01:59:33.612516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.535 [2024-04-24 01:59:33.612689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:33.535 pt3 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:29:33.794 malloc4 00:29:33.794 01:59:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:29:34.360 [2024-04-24 01:59:34.138102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:34.360 [2024-04-24 01:59:34.138385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.360 [2024-04-24 01:59:34.138538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:34.360 [2024-04-24 01:59:34.138663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.360 [2024-04-24 01:59:34.141298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.360 [2024-04-24 01:59:34.141471] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:34.360 pt4 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:29:34.360 [2024-04-24 01:59:34.330265] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:34.360 [2024-04-24 01:59:34.332476] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:34.360 [2024-04-24 01:59:34.332681] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:34.360 [2024-04-24 01:59:34.332862] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:34.360 [2024-04-24 01:59:34.333125] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:29:34.360 [2024-04-24 01:59:34.333221] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:34.360 [2024-04-24 01:59:34.333454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:29:34.360 [2024-04-24 01:59:34.333887] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:29:34.360 [2024-04-24 01:59:34.334001] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:29:34.360 [2024-04-24 01:59:34.334260] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.360 01:59:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.618 01:59:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:34.618 "name": "raid_bdev1", 00:29:34.618 "uuid": 
"d46563c6-7d33-42f9-9f68-7c01c4e09ac7", 00:29:34.618 "strip_size_kb": 64, 00:29:34.618 "state": "online", 00:29:34.618 "raid_level": "concat", 00:29:34.618 "superblock": true, 00:29:34.618 "num_base_bdevs": 4, 00:29:34.618 "num_base_bdevs_discovered": 4, 00:29:34.618 "num_base_bdevs_operational": 4, 00:29:34.618 "base_bdevs_list": [ 00:29:34.618 { 00:29:34.618 "name": "pt1", 00:29:34.618 "uuid": "f468e044-06dc-5aac-acbb-2c81785f311c", 00:29:34.618 "is_configured": true, 00:29:34.619 "data_offset": 2048, 00:29:34.619 "data_size": 63488 00:29:34.619 }, 00:29:34.619 { 00:29:34.619 "name": "pt2", 00:29:34.619 "uuid": "21299c9e-1367-5443-a37d-49e3bda5583a", 00:29:34.619 "is_configured": true, 00:29:34.619 "data_offset": 2048, 00:29:34.619 "data_size": 63488 00:29:34.619 }, 00:29:34.619 { 00:29:34.619 "name": "pt3", 00:29:34.619 "uuid": "af7b80fc-36ec-5f43-84c1-19012a29c995", 00:29:34.619 "is_configured": true, 00:29:34.619 "data_offset": 2048, 00:29:34.619 "data_size": 63488 00:29:34.619 }, 00:29:34.619 { 00:29:34.619 "name": "pt4", 00:29:34.619 "uuid": "08ff1707-f912-573a-abb8-110676d9df2c", 00:29:34.619 "is_configured": true, 00:29:34.619 "data_offset": 2048, 00:29:34.619 "data_size": 63488 00:29:34.619 } 00:29:34.619 ] 00:29:34.619 }' 00:29:34.619 01:59:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:34.619 01:59:34 -- common/autotest_common.sh@10 -- # set +x 00:29:35.185 01:59:35 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:35.185 01:59:35 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:29:35.443 [2024-04-24 01:59:35.318658] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:35.443 01:59:35 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d46563c6-7d33-42f9-9f68-7c01c4e09ac7 00:29:35.443 01:59:35 -- bdev/bdev_raid.sh@380 -- # '[' -z d46563c6-7d33-42f9-9f68-7c01c4e09ac7 ']' 00:29:35.443 01:59:35 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:35.702 [2024-04-24 01:59:35.598441] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.702 [2024-04-24 01:59:35.598684] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:35.702 [2024-04-24 01:59:35.598839] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:35.702 [2024-04-24 01:59:35.598961] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:35.702 [2024-04-24 01:59:35.599183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:29:35.702 01:59:35 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.702 01:59:35 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:29:35.960 01:59:35 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:29:35.960 01:59:35 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:29:35.960 01:59:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:29:35.960 01:59:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:36.219 01:59:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:29:36.219 01:59:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:29:36.478 01:59:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:29:36.478 01:59:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:36.478 01:59:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:29:36.478 01:59:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:36.735 01:59:36 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:36.735 01:59:36 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:36.994 01:59:36 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:29:36.994 01:59:36 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:36.994 01:59:36 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.994 01:59:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:36.994 01:59:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:36.994 01:59:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.994 01:59:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:36.994 01:59:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.994 01:59:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:36.994 01:59:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.994 01:59:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:36.994 01:59:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:36.994 01:59:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:37.252 [2024-04-24 01:59:37.110788] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:37.252 [2024-04-24 01:59:37.113146] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:37.252 [2024-04-24 01:59:37.113352] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:37.252 [2024-04-24 01:59:37.113429] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:29:37.252 [2024-04-24 01:59:37.113582] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:29:37.252 [2024-04-24 01:59:37.113745] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:29:37.252 [2024-04-24 01:59:37.113873] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:29:37.252 [2024-04-24 01:59:37.113959] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:29:37.252 [2024-04-24 01:59:37.114062] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:37.252 [2024-04-24 01:59:37.114149] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:29:37.252 request: 00:29:37.252 { 00:29:37.252 "name": "raid_bdev1", 00:29:37.252 "raid_level": "concat", 00:29:37.252 "base_bdevs": [ 00:29:37.252 "malloc1", 00:29:37.252 "malloc2", 00:29:37.252 "malloc3", 00:29:37.252 "malloc4" 00:29:37.252 ], 00:29:37.252 "superblock": false, 00:29:37.252 "strip_size_kb": 64, 00:29:37.252 "method": "bdev_raid_create", 00:29:37.252 "req_id": 1 00:29:37.252 } 00:29:37.252 Got JSON-RPC error response 00:29:37.252 response: 00:29:37.252 { 00:29:37.252 "code": -17, 00:29:37.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:37.252 } 00:29:37.252 01:59:37 -- common/autotest_common.sh@641 -- # es=1 00:29:37.252 01:59:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:37.252 01:59:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:37.252 01:59:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:37.252 01:59:37 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:29:37.252 01:59:37 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.511 01:59:37 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:29:37.511 01:59:37 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:29:37.511 01:59:37 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:37.511 [2024-04-24 01:59:37.570873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:37.511 [2024-04-24 01:59:37.571203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:37.511 [2024-04-24 01:59:37.571277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:29:37.511 [2024-04-24 01:59:37.571387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:37.511 [2024-04-24 01:59:37.573968] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:37.511 [2024-04-24 01:59:37.574194] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:37.511 [2024-04-24 01:59:37.574401] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:29:37.511 [2024-04-24 01:59:37.574547] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:37.511 pt1 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:37.770 "name": "raid_bdev1", 00:29:37.770 "uuid": "d46563c6-7d33-42f9-9f68-7c01c4e09ac7", 00:29:37.770 "strip_size_kb": 64, 00:29:37.770 "state": "configuring", 00:29:37.770 "raid_level": "concat", 00:29:37.770 "superblock": true, 00:29:37.770 "num_base_bdevs": 4, 00:29:37.770 "num_base_bdevs_discovered": 1, 00:29:37.770 "num_base_bdevs_operational": 4, 00:29:37.770 "base_bdevs_list": [ 00:29:37.770 { 00:29:37.770 "name": "pt1", 00:29:37.770 "uuid": "f468e044-06dc-5aac-acbb-2c81785f311c", 00:29:37.770 "is_configured": true, 00:29:37.770 "data_offset": 2048, 00:29:37.770 "data_size": 63488 00:29:37.770 }, 00:29:37.770 { 00:29:37.770 "name": null, 00:29:37.770 "uuid": "21299c9e-1367-5443-a37d-49e3bda5583a", 00:29:37.770 "is_configured": false, 00:29:37.770 "data_offset": 2048, 00:29:37.770 "data_size": 63488 00:29:37.770 }, 00:29:37.770 { 00:29:37.770 "name": null, 00:29:37.770 "uuid": "af7b80fc-36ec-5f43-84c1-19012a29c995", 00:29:37.770 "is_configured": false, 00:29:37.770 "data_offset": 2048, 00:29:37.770 "data_size": 63488 00:29:37.770 }, 00:29:37.770 { 00:29:37.770 "name": null, 00:29:37.770 "uuid": "08ff1707-f912-573a-abb8-110676d9df2c", 00:29:37.770 "is_configured": false, 00:29:37.770 "data_offset": 2048, 00:29:37.770 "data_size": 63488 00:29:37.770 } 00:29:37.770 ] 00:29:37.770 }' 00:29:37.770 01:59:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:37.770 01:59:37 -- common/autotest_common.sh@10 -- # set +x 00:29:38.337 01:59:38 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:29:38.337 01:59:38 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:38.601 [2024-04-24 01:59:38.595093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:38.601 [2024-04-24 01:59:38.595330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.601 [2024-04-24 01:59:38.595407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:38.601 [2024-04-24 01:59:38.595536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.601 [2024-04-24 01:59:38.596067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.601 [2024-04-24 01:59:38.596257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:38.601 [2024-04-24 01:59:38.596493] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:29:38.601 [2024-04-24 01:59:38.596614] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:38.601 pt2 00:29:38.601 01:59:38 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:38.867 [2024-04-24 01:59:38.787207] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:38.867 01:59:38 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:38.867 01:59:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:38.868 01:59:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.868 01:59:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.126 01:59:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:39.126 "name": "raid_bdev1", 00:29:39.126 "uuid": "d46563c6-7d33-42f9-9f68-7c01c4e09ac7", 00:29:39.126 "strip_size_kb": 64, 00:29:39.126 "state": "configuring", 00:29:39.126 "raid_level": "concat", 00:29:39.126 "superblock": true, 00:29:39.126 "num_base_bdevs": 4, 00:29:39.126 "num_base_bdevs_discovered": 1, 00:29:39.126 "num_base_bdevs_operational": 4, 00:29:39.126 "base_bdevs_list": [ 00:29:39.126 { 00:29:39.126 "name": "pt1", 00:29:39.126 "uuid": "f468e044-06dc-5aac-acbb-2c81785f311c", 00:29:39.126 "is_configured": true, 00:29:39.126 "data_offset": 2048, 00:29:39.126 "data_size": 63488 00:29:39.126 }, 00:29:39.126 { 00:29:39.126 "name": null, 00:29:39.126 "uuid": "21299c9e-1367-5443-a37d-49e3bda5583a", 00:29:39.126 "is_configured": false, 00:29:39.126 "data_offset": 2048, 00:29:39.126 "data_size": 63488 00:29:39.126 }, 00:29:39.126 { 00:29:39.126 "name": null, 00:29:39.126 "uuid": "af7b80fc-36ec-5f43-84c1-19012a29c995", 00:29:39.126 "is_configured": false, 00:29:39.126 "data_offset": 2048, 00:29:39.126 "data_size": 63488 00:29:39.126 }, 00:29:39.126 { 00:29:39.126 "name": null, 00:29:39.126 "uuid": "08ff1707-f912-573a-abb8-110676d9df2c", 00:29:39.126 "is_configured": false, 00:29:39.126 "data_offset": 2048, 00:29:39.126 "data_size": 63488 00:29:39.126 } 00:29:39.126 ] 00:29:39.126 }' 00:29:39.126 01:59:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:39.126 01:59:39 -- common/autotest_common.sh@10 -- # set +x 00:29:39.692 01:59:39 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:29:39.692 01:59:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:29:39.692 01:59:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:39.951 [2024-04-24 01:59:39.779411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:39.951 [2024-04-24 01:59:39.779649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.951 [2024-04-24 01:59:39.779728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:39.951 [2024-04-24 01:59:39.779818] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.951 [2024-04-24 01:59:39.780357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.951 [2024-04-24 01:59:39.780570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:39.951 [2024-04-24 01:59:39.780764] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:29:39.951 [2024-04-24 01:59:39.780867] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:39.951 pt2 00:29:39.951 01:59:39 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:29:39.951 01:59:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:29:39.951 01:59:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:39.951 [2024-04-24 01:59:39.987473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:39.951 [2024-04-24 01:59:39.987741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.951 [2024-04-24 01:59:39.987814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:39.951 [2024-04-24 01:59:39.987920] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.951 [2024-04-24 01:59:39.988464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.951 [2024-04-24 01:59:39.988643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:39.951 [2024-04-24 01:59:39.988894] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:29:39.951 [2024-04-24 01:59:39.989006] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:39.951 pt3 00:29:39.951 01:59:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:29:39.951 01:59:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:29:39.951 01:59:40 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:40.217 [2024-04-24 01:59:40.199545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:40.217 [2024-04-24 01:59:40.199854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.217 [2024-04-24 01:59:40.199934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:40.217 [2024-04-24 01:59:40.200107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.217 [2024-04-24 01:59:40.200632] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.217 [2024-04-24 01:59:40.200798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:40.217 [2024-04-24 01:59:40.201000] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:29:40.217 [2024-04-24 01:59:40.201113] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:40.217 [2024-04-24 01:59:40.201282] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:29:40.217 [2024-04-24 01:59:40.201394] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:40.217 [2024-04-24 01:59:40.201549] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:40.217 [2024-04-24 01:59:40.201878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:29:40.217 [2024-04-24 01:59:40.201994] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:29:40.217 [2024-04-24 01:59:40.202226] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.217 pt4 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.217 01:59:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.478 01:59:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:40.478 "name": "raid_bdev1", 00:29:40.478 "uuid": "d46563c6-7d33-42f9-9f68-7c01c4e09ac7", 00:29:40.478 "strip_size_kb": 64, 00:29:40.478 "state": "online", 00:29:40.478 "raid_level": "concat", 00:29:40.478 "superblock": true, 00:29:40.478 "num_base_bdevs": 4, 00:29:40.478 "num_base_bdevs_discovered": 4, 00:29:40.478 "num_base_bdevs_operational": 4, 00:29:40.478 "base_bdevs_list": [ 00:29:40.478 { 00:29:40.478 "name": "pt1", 00:29:40.478 "uuid": "f468e044-06dc-5aac-acbb-2c81785f311c", 00:29:40.478 "is_configured": true, 00:29:40.478 "data_offset": 2048, 00:29:40.478 "data_size": 63488 00:29:40.478 }, 00:29:40.478 { 00:29:40.478 "name": "pt2", 00:29:40.478 "uuid": "21299c9e-1367-5443-a37d-49e3bda5583a", 00:29:40.478 "is_configured": true, 00:29:40.479 "data_offset": 2048, 00:29:40.479 "data_size": 63488 00:29:40.479 }, 00:29:40.479 { 00:29:40.479 "name": "pt3", 00:29:40.479 "uuid": "af7b80fc-36ec-5f43-84c1-19012a29c995", 00:29:40.479 "is_configured": true, 00:29:40.479 "data_offset": 2048, 00:29:40.479 "data_size": 63488 00:29:40.479 }, 00:29:40.479 { 00:29:40.479 "name": "pt4", 00:29:40.479 "uuid": "08ff1707-f912-573a-abb8-110676d9df2c", 00:29:40.479 "is_configured": true, 00:29:40.479 "data_offset": 2048, 00:29:40.479 "data_size": 63488 00:29:40.479 } 00:29:40.479 ] 00:29:40.479 }' 00:29:40.479 01:59:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:40.479 01:59:40 -- common/autotest_common.sh@10 -- # set +x 00:29:41.048 01:59:41 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:41.048 01:59:41 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:29:41.336 [2024-04-24 01:59:41.368171] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:41.336 01:59:41 -- bdev/bdev_raid.sh@430 -- # '[' d46563c6-7d33-42f9-9f68-7c01c4e09ac7 '!=' d46563c6-7d33-42f9-9f68-7c01c4e09ac7 ']' 00:29:41.336 01:59:41 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:29:41.336 01:59:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:29:41.336 01:59:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:29:41.336 01:59:41 -- bdev/bdev_raid.sh@511 -- # killprocess 129147 00:29:41.336 01:59:41 -- common/autotest_common.sh@936 -- # '[' -z 129147 ']' 00:29:41.336 01:59:41 -- common/autotest_common.sh@940 -- # kill -0 129147 00:29:41.336 01:59:41 -- common/autotest_common.sh@941 -- # uname 00:29:41.336 01:59:41 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.336 01:59:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129147 00:29:41.605 killing process with pid 129147 00:29:41.605 01:59:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:41.605 01:59:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:41.605 01:59:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129147' 00:29:41.605 01:59:41 -- common/autotest_common.sh@955 -- # kill 129147 00:29:41.605 01:59:41 -- common/autotest_common.sh@960 -- # wait 129147 00:29:41.605 [2024-04-24 01:59:41.423742] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:41.605 [2024-04-24 01:59:41.423863] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:41.605 [2024-04-24 01:59:41.423968] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:41.605 [2024-04-24 01:59:41.423982] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:29:41.862 [2024-04-24 01:59:41.852482] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:43.235 01:59:43 -- bdev/bdev_raid.sh@513 -- # return 0 00:29:43.235 00:29:43.235 real 0m12.181s 00:29:43.235 user 0m20.286s 00:29:43.235 ************************************ 00:29:43.235 END TEST raid_superblock_test 00:29:43.235 ************************************ 00:29:43.235 sys 0m1.756s 00:29:43.235 01:59:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:43.235 01:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:29:43.493 01:59:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:43.493 01:59:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:43.493 01:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 ************************************ 00:29:43.493 START TEST raid_state_function_test 00:29:43.493 ************************************ 00:29:43.493 01:59:43 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 false 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:43.493 
01:59:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=129485 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129485' 00:29:43.493 Process raid pid: 129485 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129485 /var/tmp/spdk-raid.sock 00:29:43.493 01:59:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:43.493 01:59:43 -- common/autotest_common.sh@817 -- # '[' -z 129485 ']' 00:29:43.493 01:59:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:43.493 01:59:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:43.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:43.493 01:59:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:43.493 01:59:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:43.493 01:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 [2024-04-24 01:59:43.489063] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:29:43.493 [2024-04-24 01:59:43.489487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.751 [2024-04-24 01:59:43.671006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.009 [2024-04-24 01:59:43.966502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.268 [2024-04-24 01:59:44.219372] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:44.526 01:59:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:44.526 01:59:44 -- common/autotest_common.sh@850 -- # return 0 00:29:44.526 01:59:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:44.786 [2024-04-24 01:59:44.676679] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:44.786 [2024-04-24 01:59:44.676936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:44.786 [2024-04-24 01:59:44.677076] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:44.786 [2024-04-24 01:59:44.677181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:44.786 [2024-04-24 01:59:44.677256] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:44.786 [2024-04-24 01:59:44.677329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:44.786 [2024-04-24 01:59:44.677362] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:44.786 [2024-04-24 01:59:44.677633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:44.786 01:59:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.043 01:59:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:45.043 "name": "Existed_Raid", 00:29:45.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.043 "strip_size_kb": 0, 00:29:45.043 "state": "configuring", 00:29:45.043 "raid_level": "raid1", 00:29:45.043 "superblock": false, 00:29:45.043 "num_base_bdevs": 4, 00:29:45.043 "num_base_bdevs_discovered": 0, 00:29:45.043 "num_base_bdevs_operational": 4, 00:29:45.043 "base_bdevs_list": [ 00:29:45.043 { 00:29:45.043 "name": 
"BaseBdev1", 00:29:45.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.043 "is_configured": false, 00:29:45.043 "data_offset": 0, 00:29:45.043 "data_size": 0 00:29:45.043 }, 00:29:45.043 { 00:29:45.043 "name": "BaseBdev2", 00:29:45.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.043 "is_configured": false, 00:29:45.043 "data_offset": 0, 00:29:45.043 "data_size": 0 00:29:45.043 }, 00:29:45.043 { 00:29:45.043 "name": "BaseBdev3", 00:29:45.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.043 "is_configured": false, 00:29:45.043 "data_offset": 0, 00:29:45.043 "data_size": 0 00:29:45.043 }, 00:29:45.043 { 00:29:45.043 "name": "BaseBdev4", 00:29:45.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.043 "is_configured": false, 00:29:45.043 "data_offset": 0, 00:29:45.043 "data_size": 0 00:29:45.043 } 00:29:45.043 ] 00:29:45.043 }' 00:29:45.043 01:59:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:45.043 01:59:44 -- common/autotest_common.sh@10 -- # set +x 00:29:45.609 01:59:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:45.867 [2024-04-24 01:59:45.856772] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:45.867 [2024-04-24 01:59:45.857030] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:29:45.867 01:59:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:46.126 [2024-04-24 01:59:46.128837] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:46.126 [2024-04-24 01:59:46.129110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:46.126 [2024-04-24 01:59:46.129214] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:46.126 [2024-04-24 01:59:46.129293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:46.126 [2024-04-24 01:59:46.129562] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:46.126 [2024-04-24 01:59:46.129643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:46.126 [2024-04-24 01:59:46.129675] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:46.126 [2024-04-24 01:59:46.129723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:46.126 01:59:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:46.384 [2024-04-24 01:59:46.408818] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:46.384 BaseBdev1 00:29:46.384 01:59:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:29:46.384 01:59:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:29:46.384 01:59:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:46.384 01:59:46 -- common/autotest_common.sh@887 -- # local i 00:29:46.384 01:59:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:46.384 01:59:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:46.384 01:59:46 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:46.643 01:59:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:46.902 [ 00:29:46.902 { 00:29:46.902 "name": "BaseBdev1", 00:29:46.902 "aliases": [ 00:29:46.902 "7511e5eb-ad94-455b-b008-ebf6b75bb28e" 00:29:46.902 ], 00:29:46.902 "product_name": "Malloc disk", 00:29:46.902 "block_size": 512, 00:29:46.902 "num_blocks": 65536, 00:29:46.902 "uuid": "7511e5eb-ad94-455b-b008-ebf6b75bb28e", 00:29:46.902 "assigned_rate_limits": { 00:29:46.902 "rw_ios_per_sec": 0, 00:29:46.902 "rw_mbytes_per_sec": 0, 00:29:46.902 "r_mbytes_per_sec": 0, 00:29:46.902 "w_mbytes_per_sec": 0 00:29:46.902 }, 00:29:46.902 "claimed": true, 00:29:46.902 "claim_type": "exclusive_write", 00:29:46.902 "zoned": false, 00:29:46.902 "supported_io_types": { 00:29:46.902 "read": true, 00:29:46.902 "write": true, 00:29:46.902 "unmap": true, 00:29:46.902 "write_zeroes": true, 00:29:46.902 "flush": true, 00:29:46.902 "reset": true, 00:29:46.902 "compare": false, 00:29:46.902 "compare_and_write": false, 00:29:46.902 "abort": true, 00:29:46.902 "nvme_admin": false, 00:29:46.902 "nvme_io": false 00:29:46.902 }, 00:29:46.902 "memory_domains": [ 00:29:46.902 { 00:29:46.902 "dma_device_id": "system", 00:29:46.902 "dma_device_type": 1 00:29:46.902 }, 00:29:46.902 { 00:29:46.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:46.902 "dma_device_type": 2 00:29:46.902 } 00:29:46.902 ], 00:29:46.902 "driver_specific": {} 00:29:46.902 } 00:29:46.902 ] 00:29:46.902 01:59:46 -- common/autotest_common.sh@893 -- # return 0 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.902 01:59:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:47.161 01:59:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:47.161 "name": "Existed_Raid", 00:29:47.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.161 "strip_size_kb": 0, 00:29:47.161 "state": "configuring", 00:29:47.161 "raid_level": "raid1", 00:29:47.161 "superblock": false, 00:29:47.161 "num_base_bdevs": 4, 00:29:47.161 "num_base_bdevs_discovered": 1, 00:29:47.161 "num_base_bdevs_operational": 4, 00:29:47.161 "base_bdevs_list": [ 00:29:47.161 { 00:29:47.161 "name": "BaseBdev1", 00:29:47.161 "uuid": "7511e5eb-ad94-455b-b008-ebf6b75bb28e", 00:29:47.161 "is_configured": true, 00:29:47.161 "data_offset": 0, 00:29:47.161 "data_size": 65536 00:29:47.161 }, 00:29:47.161 { 00:29:47.161 "name": "BaseBdev2", 00:29:47.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.161 
"is_configured": false, 00:29:47.161 "data_offset": 0, 00:29:47.161 "data_size": 0 00:29:47.161 }, 00:29:47.161 { 00:29:47.161 "name": "BaseBdev3", 00:29:47.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.161 "is_configured": false, 00:29:47.161 "data_offset": 0, 00:29:47.161 "data_size": 0 00:29:47.161 }, 00:29:47.161 { 00:29:47.161 "name": "BaseBdev4", 00:29:47.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.161 "is_configured": false, 00:29:47.161 "data_offset": 0, 00:29:47.161 "data_size": 0 00:29:47.161 } 00:29:47.161 ] 00:29:47.161 }' 00:29:47.161 01:59:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:47.161 01:59:47 -- common/autotest_common.sh@10 -- # set +x 00:29:47.728 01:59:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:47.987 [2024-04-24 01:59:48.045224] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:47.987 [2024-04-24 01:59:48.045477] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:29:47.987 01:59:48 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:29:47.987 01:59:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:48.387 [2024-04-24 01:59:48.321286] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:48.387 [2024-04-24 01:59:48.323658] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:48.387 [2024-04-24 01:59:48.323859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:48.387 [2024-04-24 01:59:48.323947] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:48.387 [2024-04-24 01:59:48.324053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:48.387 [2024-04-24 01:59:48.324180] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:48.387 [2024-04-24 01:59:48.324235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:48.387 01:59:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.660 01:59:48 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:48.660 "name": "Existed_Raid", 00:29:48.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.660 "strip_size_kb": 0, 00:29:48.660 "state": "configuring", 00:29:48.660 "raid_level": "raid1", 00:29:48.660 "superblock": false, 00:29:48.660 "num_base_bdevs": 4, 00:29:48.660 "num_base_bdevs_discovered": 1, 00:29:48.660 "num_base_bdevs_operational": 4, 00:29:48.660 "base_bdevs_list": [ 00:29:48.660 { 00:29:48.660 "name": "BaseBdev1", 00:29:48.660 "uuid": "7511e5eb-ad94-455b-b008-ebf6b75bb28e", 00:29:48.660 "is_configured": true, 00:29:48.660 "data_offset": 0, 00:29:48.661 "data_size": 65536 00:29:48.661 }, 00:29:48.661 { 00:29:48.661 "name": "BaseBdev2", 00:29:48.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.661 "is_configured": false, 00:29:48.661 "data_offset": 0, 00:29:48.661 "data_size": 0 00:29:48.661 }, 00:29:48.661 { 00:29:48.661 "name": "BaseBdev3", 00:29:48.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.661 "is_configured": false, 00:29:48.661 "data_offset": 0, 00:29:48.661 "data_size": 0 00:29:48.661 }, 00:29:48.661 { 00:29:48.661 "name": "BaseBdev4", 00:29:48.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.661 "is_configured": false, 00:29:48.661 "data_offset": 0, 00:29:48.661 "data_size": 0 00:29:48.661 } 00:29:48.661 ] 00:29:48.661 }' 00:29:48.661 01:59:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:48.661 01:59:48 -- common/autotest_common.sh@10 -- # set +x 00:29:49.286 01:59:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:29:49.545 [2024-04-24 01:59:49.583829] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:49.545 BaseBdev2 00:29:49.545 01:59:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:29:49.545 01:59:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:29:49.545 01:59:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:49.545 01:59:49 -- common/autotest_common.sh@887 -- # local i 00:29:49.545 01:59:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:49.545 01:59:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:49.545 01:59:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:49.804 01:59:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:50.062 [ 00:29:50.062 { 00:29:50.062 "name": "BaseBdev2", 00:29:50.062 "aliases": [ 00:29:50.062 "ff09bb60-d3d3-4115-85b2-360176dc27da" 00:29:50.062 ], 00:29:50.062 "product_name": "Malloc disk", 00:29:50.062 "block_size": 512, 00:29:50.062 "num_blocks": 65536, 00:29:50.062 "uuid": "ff09bb60-d3d3-4115-85b2-360176dc27da", 00:29:50.062 "assigned_rate_limits": { 00:29:50.062 "rw_ios_per_sec": 0, 00:29:50.062 "rw_mbytes_per_sec": 0, 00:29:50.062 "r_mbytes_per_sec": 0, 00:29:50.062 "w_mbytes_per_sec": 0 00:29:50.062 }, 00:29:50.062 "claimed": true, 00:29:50.062 "claim_type": "exclusive_write", 00:29:50.062 "zoned": false, 00:29:50.062 "supported_io_types": { 00:29:50.062 "read": true, 00:29:50.062 "write": true, 00:29:50.062 "unmap": true, 00:29:50.062 "write_zeroes": true, 00:29:50.062 "flush": true, 00:29:50.062 "reset": true, 00:29:50.062 "compare": false, 00:29:50.062 "compare_and_write": false, 00:29:50.062 "abort": true, 00:29:50.062 "nvme_admin": 
false, 00:29:50.062 "nvme_io": false 00:29:50.062 }, 00:29:50.062 "memory_domains": [ 00:29:50.062 { 00:29:50.062 "dma_device_id": "system", 00:29:50.062 "dma_device_type": 1 00:29:50.062 }, 00:29:50.062 { 00:29:50.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:50.062 "dma_device_type": 2 00:29:50.062 } 00:29:50.062 ], 00:29:50.062 "driver_specific": {} 00:29:50.062 } 00:29:50.062 ] 00:29:50.062 01:59:50 -- common/autotest_common.sh@893 -- # return 0 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.062 01:59:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:50.320 01:59:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:50.320 "name": "Existed_Raid", 00:29:50.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.320 "strip_size_kb": 0, 00:29:50.320 "state": "configuring", 00:29:50.320 "raid_level": "raid1", 00:29:50.320 "superblock": false, 00:29:50.320 "num_base_bdevs": 4, 00:29:50.320 "num_base_bdevs_discovered": 2, 00:29:50.320 "num_base_bdevs_operational": 4, 00:29:50.320 "base_bdevs_list": [ 00:29:50.320 { 00:29:50.320 "name": "BaseBdev1", 00:29:50.320 "uuid": "7511e5eb-ad94-455b-b008-ebf6b75bb28e", 00:29:50.320 "is_configured": true, 00:29:50.320 "data_offset": 0, 00:29:50.320 "data_size": 65536 00:29:50.320 }, 00:29:50.320 { 00:29:50.320 "name": "BaseBdev2", 00:29:50.320 "uuid": "ff09bb60-d3d3-4115-85b2-360176dc27da", 00:29:50.320 "is_configured": true, 00:29:50.320 "data_offset": 0, 00:29:50.320 "data_size": 65536 00:29:50.320 }, 00:29:50.320 { 00:29:50.320 "name": "BaseBdev3", 00:29:50.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.320 "is_configured": false, 00:29:50.320 "data_offset": 0, 00:29:50.320 "data_size": 0 00:29:50.320 }, 00:29:50.320 { 00:29:50.320 "name": "BaseBdev4", 00:29:50.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.320 "is_configured": false, 00:29:50.320 "data_offset": 0, 00:29:50.320 "data_size": 0 00:29:50.320 } 00:29:50.320 ] 00:29:50.320 }' 00:29:50.320 01:59:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:50.320 01:59:50 -- common/autotest_common.sh@10 -- # set +x 00:29:51.253 01:59:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:29:51.254 [2024-04-24 01:59:51.265027] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:51.254 BaseBdev3 00:29:51.254 01:59:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev 
BaseBdev3 00:29:51.254 01:59:51 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:29:51.254 01:59:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:51.254 01:59:51 -- common/autotest_common.sh@887 -- # local i 00:29:51.254 01:59:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:51.254 01:59:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:51.254 01:59:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:51.513 01:59:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:51.772 [ 00:29:51.772 { 00:29:51.772 "name": "BaseBdev3", 00:29:51.772 "aliases": [ 00:29:51.772 "856cbcb8-2f3a-497d-9413-d81c81ea3322" 00:29:51.772 ], 00:29:51.772 "product_name": "Malloc disk", 00:29:51.772 "block_size": 512, 00:29:51.772 "num_blocks": 65536, 00:29:51.772 "uuid": "856cbcb8-2f3a-497d-9413-d81c81ea3322", 00:29:51.772 "assigned_rate_limits": { 00:29:51.772 "rw_ios_per_sec": 0, 00:29:51.772 "rw_mbytes_per_sec": 0, 00:29:51.772 "r_mbytes_per_sec": 0, 00:29:51.772 "w_mbytes_per_sec": 0 00:29:51.772 }, 00:29:51.772 "claimed": true, 00:29:51.772 "claim_type": "exclusive_write", 00:29:51.772 "zoned": false, 00:29:51.772 "supported_io_types": { 00:29:51.772 "read": true, 00:29:51.772 "write": true, 00:29:51.772 "unmap": true, 00:29:51.772 "write_zeroes": true, 00:29:51.772 "flush": true, 00:29:51.772 "reset": true, 00:29:51.772 "compare": false, 00:29:51.772 "compare_and_write": false, 00:29:51.772 "abort": true, 00:29:51.772 "nvme_admin": false, 00:29:51.772 "nvme_io": false 00:29:51.772 }, 00:29:51.772 "memory_domains": [ 00:29:51.772 { 00:29:51.772 "dma_device_id": "system", 00:29:51.772 "dma_device_type": 1 00:29:51.772 }, 00:29:51.772 { 00:29:51.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:51.772 "dma_device_type": 2 00:29:51.772 } 00:29:51.772 ], 00:29:51.772 "driver_specific": {} 00:29:51.772 } 00:29:51.772 ] 00:29:51.772 01:59:51 -- common/autotest_common.sh@893 -- # return 0 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:51.772 01:59:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.031 01:59:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:52.031 "name": "Existed_Raid", 00:29:52.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.031 "strip_size_kb": 0, 00:29:52.031 
"state": "configuring", 00:29:52.031 "raid_level": "raid1", 00:29:52.031 "superblock": false, 00:29:52.031 "num_base_bdevs": 4, 00:29:52.031 "num_base_bdevs_discovered": 3, 00:29:52.031 "num_base_bdevs_operational": 4, 00:29:52.031 "base_bdevs_list": [ 00:29:52.031 { 00:29:52.031 "name": "BaseBdev1", 00:29:52.031 "uuid": "7511e5eb-ad94-455b-b008-ebf6b75bb28e", 00:29:52.031 "is_configured": true, 00:29:52.031 "data_offset": 0, 00:29:52.031 "data_size": 65536 00:29:52.031 }, 00:29:52.031 { 00:29:52.031 "name": "BaseBdev2", 00:29:52.031 "uuid": "ff09bb60-d3d3-4115-85b2-360176dc27da", 00:29:52.031 "is_configured": true, 00:29:52.031 "data_offset": 0, 00:29:52.031 "data_size": 65536 00:29:52.031 }, 00:29:52.031 { 00:29:52.031 "name": "BaseBdev3", 00:29:52.031 "uuid": "856cbcb8-2f3a-497d-9413-d81c81ea3322", 00:29:52.031 "is_configured": true, 00:29:52.031 "data_offset": 0, 00:29:52.031 "data_size": 65536 00:29:52.031 }, 00:29:52.031 { 00:29:52.031 "name": "BaseBdev4", 00:29:52.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.031 "is_configured": false, 00:29:52.031 "data_offset": 0, 00:29:52.031 "data_size": 0 00:29:52.031 } 00:29:52.031 ] 00:29:52.031 }' 00:29:52.031 01:59:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:52.031 01:59:52 -- common/autotest_common.sh@10 -- # set +x 00:29:52.597 01:59:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:29:52.860 [2024-04-24 01:59:52.922163] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:52.860 [2024-04-24 01:59:52.922501] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:29:52.860 [2024-04-24 01:59:52.922551] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:52.860 [2024-04-24 01:59:52.922773] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:29:52.860 [2024-04-24 01:59:52.923261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:29:52.860 [2024-04-24 01:59:52.923380] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:29:52.860 [2024-04-24 01:59:52.923777] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:52.860 BaseBdev4 00:29:52.860 01:59:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:29:52.860 01:59:52 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:29:52.860 01:59:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:52.860 01:59:52 -- common/autotest_common.sh@887 -- # local i 00:29:52.860 01:59:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:52.860 01:59:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:52.860 01:59:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:53.425 01:59:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:53.425 [ 00:29:53.425 { 00:29:53.425 "name": "BaseBdev4", 00:29:53.425 "aliases": [ 00:29:53.425 "43fc75a3-3102-40ec-bc0a-09f7f428d25a" 00:29:53.425 ], 00:29:53.425 "product_name": "Malloc disk", 00:29:53.425 "block_size": 512, 00:29:53.425 "num_blocks": 65536, 00:29:53.425 "uuid": "43fc75a3-3102-40ec-bc0a-09f7f428d25a", 00:29:53.425 "assigned_rate_limits": { 
00:29:53.425 "rw_ios_per_sec": 0, 00:29:53.425 "rw_mbytes_per_sec": 0, 00:29:53.425 "r_mbytes_per_sec": 0, 00:29:53.425 "w_mbytes_per_sec": 0 00:29:53.425 }, 00:29:53.425 "claimed": true, 00:29:53.425 "claim_type": "exclusive_write", 00:29:53.425 "zoned": false, 00:29:53.425 "supported_io_types": { 00:29:53.425 "read": true, 00:29:53.425 "write": true, 00:29:53.425 "unmap": true, 00:29:53.425 "write_zeroes": true, 00:29:53.425 "flush": true, 00:29:53.425 "reset": true, 00:29:53.425 "compare": false, 00:29:53.425 "compare_and_write": false, 00:29:53.425 "abort": true, 00:29:53.425 "nvme_admin": false, 00:29:53.425 "nvme_io": false 00:29:53.425 }, 00:29:53.425 "memory_domains": [ 00:29:53.425 { 00:29:53.425 "dma_device_id": "system", 00:29:53.425 "dma_device_type": 1 00:29:53.425 }, 00:29:53.425 { 00:29:53.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:53.425 "dma_device_type": 2 00:29:53.425 } 00:29:53.425 ], 00:29:53.425 "driver_specific": {} 00:29:53.425 } 00:29:53.425 ] 00:29:53.425 01:59:53 -- common/autotest_common.sh@893 -- # return 0 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.425 01:59:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:53.683 01:59:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:53.683 "name": "Existed_Raid", 00:29:53.683 "uuid": "02daf3ea-f2bf-466d-b76d-d8738ea8dd03", 00:29:53.683 "strip_size_kb": 0, 00:29:53.683 "state": "online", 00:29:53.683 "raid_level": "raid1", 00:29:53.683 "superblock": false, 00:29:53.683 "num_base_bdevs": 4, 00:29:53.683 "num_base_bdevs_discovered": 4, 00:29:53.683 "num_base_bdevs_operational": 4, 00:29:53.683 "base_bdevs_list": [ 00:29:53.683 { 00:29:53.683 "name": "BaseBdev1", 00:29:53.683 "uuid": "7511e5eb-ad94-455b-b008-ebf6b75bb28e", 00:29:53.683 "is_configured": true, 00:29:53.683 "data_offset": 0, 00:29:53.683 "data_size": 65536 00:29:53.683 }, 00:29:53.683 { 00:29:53.683 "name": "BaseBdev2", 00:29:53.683 "uuid": "ff09bb60-d3d3-4115-85b2-360176dc27da", 00:29:53.683 "is_configured": true, 00:29:53.683 "data_offset": 0, 00:29:53.683 "data_size": 65536 00:29:53.683 }, 00:29:53.683 { 00:29:53.683 "name": "BaseBdev3", 00:29:53.683 "uuid": "856cbcb8-2f3a-497d-9413-d81c81ea3322", 00:29:53.683 "is_configured": true, 00:29:53.683 "data_offset": 0, 00:29:53.683 "data_size": 65536 00:29:53.683 }, 00:29:53.683 { 00:29:53.683 "name": "BaseBdev4", 00:29:53.683 "uuid": "43fc75a3-3102-40ec-bc0a-09f7f428d25a", 00:29:53.683 "is_configured": true, 00:29:53.683 "data_offset": 0, 
00:29:53.683 "data_size": 65536 00:29:53.683 } 00:29:53.683 ] 00:29:53.683 }' 00:29:53.683 01:59:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:53.683 01:59:53 -- common/autotest_common.sh@10 -- # set +x 00:29:54.251 01:59:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:54.509 [2024-04-24 01:59:54.526677] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.767 01:59:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:55.050 01:59:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:55.050 "name": "Existed_Raid", 00:29:55.050 "uuid": "02daf3ea-f2bf-466d-b76d-d8738ea8dd03", 00:29:55.050 "strip_size_kb": 0, 00:29:55.050 "state": "online", 00:29:55.050 "raid_level": "raid1", 00:29:55.050 "superblock": false, 00:29:55.050 "num_base_bdevs": 4, 00:29:55.050 "num_base_bdevs_discovered": 3, 00:29:55.050 "num_base_bdevs_operational": 3, 00:29:55.050 "base_bdevs_list": [ 00:29:55.050 { 00:29:55.050 "name": null, 00:29:55.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.050 "is_configured": false, 00:29:55.050 "data_offset": 0, 00:29:55.050 "data_size": 65536 00:29:55.050 }, 00:29:55.050 { 00:29:55.050 "name": "BaseBdev2", 00:29:55.050 "uuid": "ff09bb60-d3d3-4115-85b2-360176dc27da", 00:29:55.050 "is_configured": true, 00:29:55.050 "data_offset": 0, 00:29:55.050 "data_size": 65536 00:29:55.050 }, 00:29:55.050 { 00:29:55.050 "name": "BaseBdev3", 00:29:55.050 "uuid": "856cbcb8-2f3a-497d-9413-d81c81ea3322", 00:29:55.050 "is_configured": true, 00:29:55.050 "data_offset": 0, 00:29:55.050 "data_size": 65536 00:29:55.050 }, 00:29:55.050 { 00:29:55.050 "name": "BaseBdev4", 00:29:55.050 "uuid": "43fc75a3-3102-40ec-bc0a-09f7f428d25a", 00:29:55.050 "is_configured": true, 00:29:55.050 "data_offset": 0, 00:29:55.050 "data_size": 65536 00:29:55.050 } 00:29:55.050 ] 00:29:55.050 }' 00:29:55.050 01:59:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:55.050 01:59:54 -- common/autotest_common.sh@10 -- # set +x 00:29:55.616 01:59:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:29:55.616 01:59:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:55.616 01:59:55 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.616 01:59:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:55.875 01:59:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:55.875 01:59:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:55.875 01:59:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:55.875 [2024-04-24 01:59:55.921739] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:56.134 01:59:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:56.134 01:59:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:56.134 01:59:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.134 01:59:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:56.393 01:59:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:56.393 01:59:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:56.393 01:59:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:29:56.393 [2024-04-24 01:59:56.432609] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:56.652 01:59:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:56.652 01:59:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:56.652 01:59:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.652 01:59:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:29:56.912 01:59:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:29:56.912 01:59:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:56.912 01:59:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:29:57.171 [2024-04-24 01:59:56.998702] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:29:57.171 [2024-04-24 01:59:56.999048] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:57.171 [2024-04-24 01:59:57.104419] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:57.171 [2024-04-24 01:59:57.104727] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:57.171 [2024-04-24 01:59:57.104821] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:29:57.171 01:59:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:29:57.171 01:59:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:29:57.171 01:59:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.171 01:59:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:29:57.430 01:59:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:29:57.430 01:59:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:29:57.430 01:59:57 -- bdev/bdev_raid.sh@287 -- # killprocess 129485 00:29:57.430 01:59:57 -- common/autotest_common.sh@936 -- # '[' -z 129485 ']' 00:29:57.430 01:59:57 -- common/autotest_common.sh@940 -- # kill -0 129485 00:29:57.430 01:59:57 -- common/autotest_common.sh@941 -- # uname 00:29:57.430 
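The passage above is the member-removal check: deleting BaseBdev1 out of the online raid1 leaves the array online, because has_redundancy succeeds for raid1, and the following dump reports num_base_bdevs_discovered 3 of 4; deleting BaseBdev2, BaseBdev3, and BaseBdev4 afterwards leaves no members, so the raid moves to offline and is cleaned up. A hedged sketch of the same check against a running target, using only RPCs that appear in the trace (the rpc wrapper and the jq output format here are illustrative, not the test's own helpers):

# Sketch: repeat the removal check against the spdk-raid.sock target.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }

raid_state() {
    rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
}

# raid1 is redundant, so losing one member keeps the array online.
rpc bdev_malloc_delete BaseBdev1
raid_state          # expected: "online 3/4", matching the dump above

# With every member gone the raid cannot stay up; the trace shows it moving
# to the offline state and then being cleaned up.
for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    rpc bdev_malloc_delete "$b"
done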
01:59:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:57.430 01:59:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129485 00:29:57.430 killing process with pid 129485 00:29:57.430 01:59:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:57.430 01:59:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:57.430 01:59:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129485' 00:29:57.430 01:59:57 -- common/autotest_common.sh@955 -- # kill 129485 00:29:57.430 01:59:57 -- common/autotest_common.sh@960 -- # wait 129485 00:29:57.430 [2024-04-24 01:59:57.460622] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:57.430 [2024-04-24 01:59:57.460766] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:58.809 01:59:58 -- bdev/bdev_raid.sh@289 -- # return 0 00:29:58.809 00:29:58.809 real 0m15.459s 00:29:58.809 user 0m26.655s 00:29:58.809 sys 0m2.168s 00:29:58.809 01:59:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:58.809 01:59:58 -- common/autotest_common.sh@10 -- # set +x 00:29:58.809 ************************************ 00:29:58.809 END TEST raid_state_function_test 00:29:58.809 ************************************ 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:29:59.068 01:59:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:59.068 01:59:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:59.068 01:59:58 -- common/autotest_common.sh@10 -- # set +x 00:29:59.068 ************************************ 00:29:59.068 START TEST raid_state_function_test_sb 00:29:59.068 ************************************ 00:29:59.068 01:59:58 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 true 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 
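At this point the first test has killed its bdev_svc process and reported its runtime, and raid_state_function_test_sb begins the same state-machine walk with superblock=true, so its bdev_raid_create calls carry -s. Condensed into one hedged sketch, the flow both tests drive looks roughly like the following; it is a simplification of the script traced in this log (the real test also deletes and re-creates the raid between steps), and the expected-state comments mirror the dumps above.

# Condensed sketch of the raid_state_function_test flow (simplified).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }
superblock_create_arg=""          # becomes "-s" in the _sb variant starting here

# 1. Creating the raid before any member exists leaves it in "configuring".
#    ($superblock_create_arg is deliberately unquoted so an empty value vanishes.)
rpc bdev_raid_create $superblock_create_arg -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# 2. Add 32 MB, 512-byte-block malloc members one at a time, waiting for each
#    the way waitforbdev does: an examine barrier plus a timed lookup.
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    rpc bdev_malloc_create 32 512 -b "$b"
    rpc bdev_wait_for_examine
    rpc bdev_get_bdevs -b "$b" -t 2000 >/dev/null
    rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
    # expected: "configuring 1" .. "configuring 3", then "online 4" once the
    # last member is claimed, exactly as the dumps above progressed.
done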
00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@226 -- # raid_pid=129938 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129938' 00:29:59.068 Process raid pid: 129938 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129938 /var/tmp/spdk-raid.sock 00:29:59.068 01:59:58 -- common/autotest_common.sh@817 -- # '[' -z 129938 ']' 00:29:59.068 01:59:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:59.068 01:59:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:59.068 01:59:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:59.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:59.068 01:59:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:59.068 01:59:58 -- common/autotest_common.sh@10 -- # set +x 00:29:59.068 01:59:58 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:59.068 [2024-04-24 01:59:59.048873] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:29:59.068 [2024-04-24 01:59:59.049422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.327 [2024-04-24 01:59:59.222260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.586 [2024-04-24 01:59:59.516806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.844 [2024-04-24 01:59:59.805239] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:00.256 02:00:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:00.256 02:00:00 -- common/autotest_common.sh@850 -- # return 0 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:00.256 [2024-04-24 02:00:00.316384] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:00.256 [2024-04-24 02:00:00.316671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:00.256 [2024-04-24 02:00:00.316803] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:00.256 [2024-04-24 02:00:00.316871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:00.256 [2024-04-24 02:00:00.316980] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:00.256 [2024-04-24 02:00:00.317055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:00.256 [2024-04-24 02:00:00.317203] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:00.256 [2024-04-24 02:00:00.317262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:00.256 02:00:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:00.516 02:00:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.516 02:00:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.516 02:00:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:00.516 "name": "Existed_Raid", 00:30:00.516 "uuid": "2f3fc55e-e499-4382-b563-9e006289386d", 00:30:00.516 "strip_size_kb": 0, 00:30:00.516 "state": "configuring", 00:30:00.516 "raid_level": "raid1", 00:30:00.516 "superblock": true, 00:30:00.516 "num_base_bdevs": 4, 00:30:00.516 "num_base_bdevs_discovered": 0, 00:30:00.516 "num_base_bdevs_operational": 4, 00:30:00.516 "base_bdevs_list": [ 00:30:00.516 { 00:30:00.516 "name": "BaseBdev1", 00:30:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.516 "is_configured": false, 00:30:00.516 "data_offset": 0, 00:30:00.516 "data_size": 0 00:30:00.516 }, 00:30:00.516 { 00:30:00.516 "name": "BaseBdev2", 00:30:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.516 "is_configured": false, 00:30:00.516 "data_offset": 0, 00:30:00.516 "data_size": 0 00:30:00.516 }, 00:30:00.516 { 00:30:00.516 "name": "BaseBdev3", 00:30:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.516 "is_configured": false, 00:30:00.516 "data_offset": 0, 00:30:00.516 "data_size": 0 00:30:00.516 }, 00:30:00.516 { 00:30:00.516 "name": "BaseBdev4", 00:30:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.516 "is_configured": false, 00:30:00.516 "data_offset": 0, 00:30:00.516 "data_size": 0 00:30:00.516 } 00:30:00.516 ] 00:30:00.516 }' 00:30:00.516 02:00:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:00.516 02:00:00 -- common/autotest_common.sh@10 -- # set +x 00:30:01.449 02:00:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:01.449 [2024-04-24 02:00:01.460448] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:01.449 [2024-04-24 02:00:01.460731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:30:01.449 02:00:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:01.706 [2024-04-24 02:00:01.732560] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:01.706 [2024-04-24 02:00:01.732902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:01.706 [2024-04-24 02:00:01.733024] 
bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:01.706 [2024-04-24 02:00:01.733089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:01.706 [2024-04-24 02:00:01.733171] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:01.706 [2024-04-24 02:00:01.733247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:01.706 [2024-04-24 02:00:01.733276] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:01.706 [2024-04-24 02:00:01.733371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:01.706 02:00:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:02.267 [2024-04-24 02:00:02.059187] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:02.267 BaseBdev1 00:30:02.267 02:00:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:30:02.267 02:00:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:02.267 02:00:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:02.267 02:00:02 -- common/autotest_common.sh@887 -- # local i 00:30:02.267 02:00:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:02.267 02:00:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:02.267 02:00:02 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:02.525 02:00:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:02.784 [ 00:30:02.784 { 00:30:02.784 "name": "BaseBdev1", 00:30:02.784 "aliases": [ 00:30:02.784 "f881defd-1bf0-43b3-af84-6e270d706e82" 00:30:02.784 ], 00:30:02.784 "product_name": "Malloc disk", 00:30:02.784 "block_size": 512, 00:30:02.784 "num_blocks": 65536, 00:30:02.784 "uuid": "f881defd-1bf0-43b3-af84-6e270d706e82", 00:30:02.784 "assigned_rate_limits": { 00:30:02.784 "rw_ios_per_sec": 0, 00:30:02.784 "rw_mbytes_per_sec": 0, 00:30:02.784 "r_mbytes_per_sec": 0, 00:30:02.784 "w_mbytes_per_sec": 0 00:30:02.784 }, 00:30:02.784 "claimed": true, 00:30:02.784 "claim_type": "exclusive_write", 00:30:02.784 "zoned": false, 00:30:02.784 "supported_io_types": { 00:30:02.784 "read": true, 00:30:02.784 "write": true, 00:30:02.784 "unmap": true, 00:30:02.784 "write_zeroes": true, 00:30:02.784 "flush": true, 00:30:02.784 "reset": true, 00:30:02.784 "compare": false, 00:30:02.784 "compare_and_write": false, 00:30:02.784 "abort": true, 00:30:02.784 "nvme_admin": false, 00:30:02.784 "nvme_io": false 00:30:02.784 }, 00:30:02.784 "memory_domains": [ 00:30:02.784 { 00:30:02.784 "dma_device_id": "system", 00:30:02.784 "dma_device_type": 1 00:30:02.784 }, 00:30:02.784 { 00:30:02.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.784 "dma_device_type": 2 00:30:02.784 } 00:30:02.784 ], 00:30:02.784 "driver_specific": {} 00:30:02.784 } 00:30:02.784 ] 00:30:02.784 02:00:02 -- common/autotest_common.sh@893 -- # return 0 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:02.784 
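One visible difference in this superblock run: because the raid was created with -s, the Existed_Raid dumps that follow report data_offset 2048 and data_size 63488 for every configured member, where the earlier non-superblock run showed 0 and 65536. That is 2048 blocks of 512 bytes, so 1 MiB at the head of each 32 MiB member is set aside, presumably for the on-disk raid metadata (the purpose of the reserved region is an inference from these numbers, not stated in the log). A small hedged sketch for reading that geometry back out of the running target:

# Sketch: read back the member geometry of the superblock-enabled array.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[]
             | "\(.name) offset=\(.data_offset) size=\(.data_size)"'
# Configured members print "offset=2048 size=63488"; members not yet added
# still show "offset=0 size=0", as in the dumps that follow.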
02:00:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.784 02:00:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:02.784 "name": "Existed_Raid", 00:30:02.784 "uuid": "bf9f46ff-7b60-43bd-a6a5-7815644f7f55", 00:30:02.784 "strip_size_kb": 0, 00:30:02.784 "state": "configuring", 00:30:02.784 "raid_level": "raid1", 00:30:02.784 "superblock": true, 00:30:02.784 "num_base_bdevs": 4, 00:30:02.784 "num_base_bdevs_discovered": 1, 00:30:02.784 "num_base_bdevs_operational": 4, 00:30:02.784 "base_bdevs_list": [ 00:30:02.784 { 00:30:02.784 "name": "BaseBdev1", 00:30:02.784 "uuid": "f881defd-1bf0-43b3-af84-6e270d706e82", 00:30:02.784 "is_configured": true, 00:30:02.784 "data_offset": 2048, 00:30:02.784 "data_size": 63488 00:30:02.784 }, 00:30:02.784 { 00:30:02.784 "name": "BaseBdev2", 00:30:02.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.784 "is_configured": false, 00:30:02.784 "data_offset": 0, 00:30:02.784 "data_size": 0 00:30:02.784 }, 00:30:02.784 { 00:30:02.784 "name": "BaseBdev3", 00:30:02.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.784 "is_configured": false, 00:30:02.784 "data_offset": 0, 00:30:02.784 "data_size": 0 00:30:02.784 }, 00:30:02.784 { 00:30:02.784 "name": "BaseBdev4", 00:30:02.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.784 "is_configured": false, 00:30:02.784 "data_offset": 0, 00:30:02.784 "data_size": 0 00:30:02.784 } 00:30:02.784 ] 00:30:02.784 }' 00:30:02.785 02:00:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:02.785 02:00:02 -- common/autotest_common.sh@10 -- # set +x 00:30:03.721 02:00:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:03.980 [2024-04-24 02:00:03.811627] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:03.980 [2024-04-24 02:00:03.811873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:30:03.980 02:00:03 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:30:03.980 02:00:03 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:04.238 02:00:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:04.506 BaseBdev1 00:30:04.506 02:00:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:30:04.506 02:00:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:04.506 02:00:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:04.506 02:00:04 -- common/autotest_common.sh@887 -- # local i 00:30:04.506 02:00:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:04.506 02:00:04 -- common/autotest_common.sh@888 -- 
# bdev_timeout=2000 00:30:04.506 02:00:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:04.776 02:00:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:05.034 [ 00:30:05.034 { 00:30:05.034 "name": "BaseBdev1", 00:30:05.034 "aliases": [ 00:30:05.034 "5140850d-7fbd-4714-80dd-9957b113b96a" 00:30:05.034 ], 00:30:05.034 "product_name": "Malloc disk", 00:30:05.034 "block_size": 512, 00:30:05.034 "num_blocks": 65536, 00:30:05.034 "uuid": "5140850d-7fbd-4714-80dd-9957b113b96a", 00:30:05.034 "assigned_rate_limits": { 00:30:05.034 "rw_ios_per_sec": 0, 00:30:05.034 "rw_mbytes_per_sec": 0, 00:30:05.034 "r_mbytes_per_sec": 0, 00:30:05.034 "w_mbytes_per_sec": 0 00:30:05.034 }, 00:30:05.034 "claimed": false, 00:30:05.034 "zoned": false, 00:30:05.034 "supported_io_types": { 00:30:05.034 "read": true, 00:30:05.034 "write": true, 00:30:05.034 "unmap": true, 00:30:05.034 "write_zeroes": true, 00:30:05.034 "flush": true, 00:30:05.034 "reset": true, 00:30:05.034 "compare": false, 00:30:05.034 "compare_and_write": false, 00:30:05.034 "abort": true, 00:30:05.034 "nvme_admin": false, 00:30:05.034 "nvme_io": false 00:30:05.034 }, 00:30:05.034 "memory_domains": [ 00:30:05.034 { 00:30:05.034 "dma_device_id": "system", 00:30:05.034 "dma_device_type": 1 00:30:05.034 }, 00:30:05.034 { 00:30:05.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:05.034 "dma_device_type": 2 00:30:05.034 } 00:30:05.034 ], 00:30:05.034 "driver_specific": {} 00:30:05.034 } 00:30:05.034 ] 00:30:05.034 02:00:05 -- common/autotest_common.sh@893 -- # return 0 00:30:05.034 02:00:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:05.293 [2024-04-24 02:00:05.298240] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:05.293 [2024-04-24 02:00:05.300645] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:05.293 [2024-04-24 02:00:05.300876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:05.293 [2024-04-24 02:00:05.300989] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:05.293 [2024-04-24 02:00:05.301054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:05.293 [2024-04-24 02:00:05.301231] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:05.293 [2024-04-24 02:00:05.301288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@122 -- # 
local raid_bdev_info 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.293 02:00:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.551 02:00:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:05.551 "name": "Existed_Raid", 00:30:05.551 "uuid": "e1c99775-cf89-477e-b862-4e5834576f37", 00:30:05.551 "strip_size_kb": 0, 00:30:05.551 "state": "configuring", 00:30:05.551 "raid_level": "raid1", 00:30:05.551 "superblock": true, 00:30:05.551 "num_base_bdevs": 4, 00:30:05.551 "num_base_bdevs_discovered": 1, 00:30:05.551 "num_base_bdevs_operational": 4, 00:30:05.551 "base_bdevs_list": [ 00:30:05.551 { 00:30:05.551 "name": "BaseBdev1", 00:30:05.551 "uuid": "5140850d-7fbd-4714-80dd-9957b113b96a", 00:30:05.551 "is_configured": true, 00:30:05.551 "data_offset": 2048, 00:30:05.551 "data_size": 63488 00:30:05.551 }, 00:30:05.551 { 00:30:05.551 "name": "BaseBdev2", 00:30:05.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.551 "is_configured": false, 00:30:05.551 "data_offset": 0, 00:30:05.551 "data_size": 0 00:30:05.551 }, 00:30:05.551 { 00:30:05.551 "name": "BaseBdev3", 00:30:05.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.551 "is_configured": false, 00:30:05.551 "data_offset": 0, 00:30:05.551 "data_size": 0 00:30:05.551 }, 00:30:05.551 { 00:30:05.551 "name": "BaseBdev4", 00:30:05.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.551 "is_configured": false, 00:30:05.551 "data_offset": 0, 00:30:05.551 "data_size": 0 00:30:05.551 } 00:30:05.551 ] 00:30:05.551 }' 00:30:05.551 02:00:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:05.551 02:00:05 -- common/autotest_common.sh@10 -- # set +x 00:30:06.484 02:00:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:06.484 [2024-04-24 02:00:06.560963] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:06.484 BaseBdev2 00:30:06.741 02:00:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:30:06.741 02:00:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:30:06.741 02:00:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:06.741 02:00:06 -- common/autotest_common.sh@887 -- # local i 00:30:06.742 02:00:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:06.742 02:00:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:06.742 02:00:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:06.742 02:00:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:07.000 [ 00:30:07.000 { 00:30:07.000 "name": "BaseBdev2", 00:30:07.000 "aliases": [ 00:30:07.000 "1e6865e9-9747-45a2-abcf-a9d3bf0300f6" 00:30:07.000 ], 00:30:07.000 "product_name": "Malloc disk", 00:30:07.000 "block_size": 512, 00:30:07.000 "num_blocks": 65536, 00:30:07.000 "uuid": "1e6865e9-9747-45a2-abcf-a9d3bf0300f6", 00:30:07.000 "assigned_rate_limits": { 00:30:07.000 "rw_ios_per_sec": 0, 00:30:07.000 "rw_mbytes_per_sec": 0, 00:30:07.000 
"r_mbytes_per_sec": 0, 00:30:07.000 "w_mbytes_per_sec": 0 00:30:07.000 }, 00:30:07.000 "claimed": true, 00:30:07.000 "claim_type": "exclusive_write", 00:30:07.000 "zoned": false, 00:30:07.000 "supported_io_types": { 00:30:07.000 "read": true, 00:30:07.000 "write": true, 00:30:07.000 "unmap": true, 00:30:07.000 "write_zeroes": true, 00:30:07.000 "flush": true, 00:30:07.000 "reset": true, 00:30:07.000 "compare": false, 00:30:07.000 "compare_and_write": false, 00:30:07.000 "abort": true, 00:30:07.000 "nvme_admin": false, 00:30:07.000 "nvme_io": false 00:30:07.000 }, 00:30:07.000 "memory_domains": [ 00:30:07.000 { 00:30:07.000 "dma_device_id": "system", 00:30:07.000 "dma_device_type": 1 00:30:07.000 }, 00:30:07.000 { 00:30:07.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.000 "dma_device_type": 2 00:30:07.000 } 00:30:07.000 ], 00:30:07.000 "driver_specific": {} 00:30:07.000 } 00:30:07.000 ] 00:30:07.000 02:00:07 -- common/autotest_common.sh@893 -- # return 0 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.000 02:00:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:07.258 02:00:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:07.258 "name": "Existed_Raid", 00:30:07.258 "uuid": "e1c99775-cf89-477e-b862-4e5834576f37", 00:30:07.258 "strip_size_kb": 0, 00:30:07.258 "state": "configuring", 00:30:07.258 "raid_level": "raid1", 00:30:07.258 "superblock": true, 00:30:07.258 "num_base_bdevs": 4, 00:30:07.258 "num_base_bdevs_discovered": 2, 00:30:07.258 "num_base_bdevs_operational": 4, 00:30:07.258 "base_bdevs_list": [ 00:30:07.258 { 00:30:07.258 "name": "BaseBdev1", 00:30:07.258 "uuid": "5140850d-7fbd-4714-80dd-9957b113b96a", 00:30:07.258 "is_configured": true, 00:30:07.258 "data_offset": 2048, 00:30:07.258 "data_size": 63488 00:30:07.258 }, 00:30:07.258 { 00:30:07.258 "name": "BaseBdev2", 00:30:07.258 "uuid": "1e6865e9-9747-45a2-abcf-a9d3bf0300f6", 00:30:07.258 "is_configured": true, 00:30:07.258 "data_offset": 2048, 00:30:07.258 "data_size": 63488 00:30:07.258 }, 00:30:07.258 { 00:30:07.258 "name": "BaseBdev3", 00:30:07.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.258 "is_configured": false, 00:30:07.258 "data_offset": 0, 00:30:07.258 "data_size": 0 00:30:07.258 }, 00:30:07.258 { 00:30:07.258 "name": "BaseBdev4", 00:30:07.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.258 "is_configured": false, 00:30:07.258 "data_offset": 0, 00:30:07.258 "data_size": 0 00:30:07.258 } 00:30:07.258 ] 
00:30:07.258 }' 00:30:07.258 02:00:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:07.258 02:00:07 -- common/autotest_common.sh@10 -- # set +x 00:30:08.281 02:00:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:08.281 [2024-04-24 02:00:08.333202] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:08.281 BaseBdev3 00:30:08.281 02:00:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:30:08.281 02:00:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:30:08.281 02:00:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:08.281 02:00:08 -- common/autotest_common.sh@887 -- # local i 00:30:08.281 02:00:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:08.281 02:00:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:08.281 02:00:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:08.552 02:00:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:08.816 [ 00:30:08.816 { 00:30:08.816 "name": "BaseBdev3", 00:30:08.816 "aliases": [ 00:30:08.816 "8a25cccd-a029-4687-8ddb-9a668b8df6d1" 00:30:08.816 ], 00:30:08.816 "product_name": "Malloc disk", 00:30:08.816 "block_size": 512, 00:30:08.816 "num_blocks": 65536, 00:30:08.817 "uuid": "8a25cccd-a029-4687-8ddb-9a668b8df6d1", 00:30:08.817 "assigned_rate_limits": { 00:30:08.817 "rw_ios_per_sec": 0, 00:30:08.817 "rw_mbytes_per_sec": 0, 00:30:08.817 "r_mbytes_per_sec": 0, 00:30:08.817 "w_mbytes_per_sec": 0 00:30:08.817 }, 00:30:08.817 "claimed": true, 00:30:08.817 "claim_type": "exclusive_write", 00:30:08.817 "zoned": false, 00:30:08.817 "supported_io_types": { 00:30:08.817 "read": true, 00:30:08.817 "write": true, 00:30:08.817 "unmap": true, 00:30:08.817 "write_zeroes": true, 00:30:08.817 "flush": true, 00:30:08.817 "reset": true, 00:30:08.817 "compare": false, 00:30:08.817 "compare_and_write": false, 00:30:08.817 "abort": true, 00:30:08.817 "nvme_admin": false, 00:30:08.817 "nvme_io": false 00:30:08.817 }, 00:30:08.817 "memory_domains": [ 00:30:08.817 { 00:30:08.817 "dma_device_id": "system", 00:30:08.817 "dma_device_type": 1 00:30:08.817 }, 00:30:08.817 { 00:30:08.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.817 "dma_device_type": 2 00:30:08.817 } 00:30:08.817 ], 00:30:08.817 "driver_specific": {} 00:30:08.817 } 00:30:08.817 ] 00:30:08.817 02:00:08 -- common/autotest_common.sh@893 -- # return 0 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:08.817 02:00:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.090 02:00:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:09.090 "name": "Existed_Raid", 00:30:09.090 "uuid": "e1c99775-cf89-477e-b862-4e5834576f37", 00:30:09.090 "strip_size_kb": 0, 00:30:09.090 "state": "configuring", 00:30:09.090 "raid_level": "raid1", 00:30:09.090 "superblock": true, 00:30:09.090 "num_base_bdevs": 4, 00:30:09.090 "num_base_bdevs_discovered": 3, 00:30:09.090 "num_base_bdevs_operational": 4, 00:30:09.090 "base_bdevs_list": [ 00:30:09.090 { 00:30:09.090 "name": "BaseBdev1", 00:30:09.090 "uuid": "5140850d-7fbd-4714-80dd-9957b113b96a", 00:30:09.090 "is_configured": true, 00:30:09.090 "data_offset": 2048, 00:30:09.090 "data_size": 63488 00:30:09.090 }, 00:30:09.090 { 00:30:09.090 "name": "BaseBdev2", 00:30:09.090 "uuid": "1e6865e9-9747-45a2-abcf-a9d3bf0300f6", 00:30:09.090 "is_configured": true, 00:30:09.090 "data_offset": 2048, 00:30:09.090 "data_size": 63488 00:30:09.090 }, 00:30:09.090 { 00:30:09.090 "name": "BaseBdev3", 00:30:09.090 "uuid": "8a25cccd-a029-4687-8ddb-9a668b8df6d1", 00:30:09.090 "is_configured": true, 00:30:09.090 "data_offset": 2048, 00:30:09.090 "data_size": 63488 00:30:09.090 }, 00:30:09.090 { 00:30:09.090 "name": "BaseBdev4", 00:30:09.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.090 "is_configured": false, 00:30:09.090 "data_offset": 0, 00:30:09.090 "data_size": 0 00:30:09.090 } 00:30:09.090 ] 00:30:09.090 }' 00:30:09.090 02:00:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:09.090 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:30:09.670 02:00:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:09.957 [2024-04-24 02:00:09.984093] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:09.957 [2024-04-24 02:00:09.984656] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:30:09.958 [2024-04-24 02:00:09.984803] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:09.958 [2024-04-24 02:00:09.985102] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:30:09.959 [2024-04-24 02:00:09.985633] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:30:09.959 [2024-04-24 02:00:09.985766] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:30:09.959 [2024-04-24 02:00:09.986126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:09.959 BaseBdev4 00:30:09.959 02:00:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:30:09.959 02:00:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:30:09.959 02:00:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:09.959 02:00:10 -- common/autotest_common.sh@887 -- # local i 00:30:09.959 02:00:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:09.959 02:00:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:09.959 02:00:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:10.248 
02:00:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:10.900 [ 00:30:10.900 { 00:30:10.900 "name": "BaseBdev4", 00:30:10.900 "aliases": [ 00:30:10.900 "024fb1b4-7b6e-479e-b29e-0d6af96bf812" 00:30:10.900 ], 00:30:10.900 "product_name": "Malloc disk", 00:30:10.900 "block_size": 512, 00:30:10.900 "num_blocks": 65536, 00:30:10.900 "uuid": "024fb1b4-7b6e-479e-b29e-0d6af96bf812", 00:30:10.901 "assigned_rate_limits": { 00:30:10.901 "rw_ios_per_sec": 0, 00:30:10.901 "rw_mbytes_per_sec": 0, 00:30:10.901 "r_mbytes_per_sec": 0, 00:30:10.901 "w_mbytes_per_sec": 0 00:30:10.901 }, 00:30:10.901 "claimed": true, 00:30:10.901 "claim_type": "exclusive_write", 00:30:10.901 "zoned": false, 00:30:10.901 "supported_io_types": { 00:30:10.901 "read": true, 00:30:10.901 "write": true, 00:30:10.901 "unmap": true, 00:30:10.901 "write_zeroes": true, 00:30:10.901 "flush": true, 00:30:10.901 "reset": true, 00:30:10.901 "compare": false, 00:30:10.901 "compare_and_write": false, 00:30:10.901 "abort": true, 00:30:10.901 "nvme_admin": false, 00:30:10.901 "nvme_io": false 00:30:10.901 }, 00:30:10.901 "memory_domains": [ 00:30:10.901 { 00:30:10.901 "dma_device_id": "system", 00:30:10.901 "dma_device_type": 1 00:30:10.901 }, 00:30:10.901 { 00:30:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:10.901 "dma_device_type": 2 00:30:10.901 } 00:30:10.901 ], 00:30:10.901 "driver_specific": {} 00:30:10.901 } 00:30:10.901 ] 00:30:10.901 02:00:10 -- common/autotest_common.sh@893 -- # return 0 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:10.901 "name": "Existed_Raid", 00:30:10.901 "uuid": "e1c99775-cf89-477e-b862-4e5834576f37", 00:30:10.901 "strip_size_kb": 0, 00:30:10.901 "state": "online", 00:30:10.901 "raid_level": "raid1", 00:30:10.901 "superblock": true, 00:30:10.901 "num_base_bdevs": 4, 00:30:10.901 "num_base_bdevs_discovered": 4, 00:30:10.901 "num_base_bdevs_operational": 4, 00:30:10.901 "base_bdevs_list": [ 00:30:10.901 { 00:30:10.901 "name": "BaseBdev1", 00:30:10.901 "uuid": "5140850d-7fbd-4714-80dd-9957b113b96a", 00:30:10.901 "is_configured": true, 00:30:10.901 "data_offset": 2048, 00:30:10.901 "data_size": 63488 00:30:10.901 }, 00:30:10.901 { 00:30:10.901 "name": "BaseBdev2", 00:30:10.901 "uuid": 
"1e6865e9-9747-45a2-abcf-a9d3bf0300f6", 00:30:10.901 "is_configured": true, 00:30:10.901 "data_offset": 2048, 00:30:10.901 "data_size": 63488 00:30:10.901 }, 00:30:10.901 { 00:30:10.901 "name": "BaseBdev3", 00:30:10.901 "uuid": "8a25cccd-a029-4687-8ddb-9a668b8df6d1", 00:30:10.901 "is_configured": true, 00:30:10.901 "data_offset": 2048, 00:30:10.901 "data_size": 63488 00:30:10.901 }, 00:30:10.901 { 00:30:10.901 "name": "BaseBdev4", 00:30:10.901 "uuid": "024fb1b4-7b6e-479e-b29e-0d6af96bf812", 00:30:10.901 "is_configured": true, 00:30:10.901 "data_offset": 2048, 00:30:10.901 "data_size": 63488 00:30:10.901 } 00:30:10.901 ] 00:30:10.901 }' 00:30:10.901 02:00:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:10.901 02:00:10 -- common/autotest_common.sh@10 -- # set +x 00:30:11.513 02:00:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:11.791 [2024-04-24 02:00:11.664621] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:11.791 02:00:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:30:11.791 02:00:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:30:11.791 02:00:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:11.791 02:00:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:30:11.791 02:00:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:30:11.791 02:00:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.792 02:00:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.064 02:00:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:12.064 "name": "Existed_Raid", 00:30:12.064 "uuid": "e1c99775-cf89-477e-b862-4e5834576f37", 00:30:12.064 "strip_size_kb": 0, 00:30:12.064 "state": "online", 00:30:12.064 "raid_level": "raid1", 00:30:12.064 "superblock": true, 00:30:12.064 "num_base_bdevs": 4, 00:30:12.064 "num_base_bdevs_discovered": 3, 00:30:12.064 "num_base_bdevs_operational": 3, 00:30:12.064 "base_bdevs_list": [ 00:30:12.064 { 00:30:12.064 "name": null, 00:30:12.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.064 "is_configured": false, 00:30:12.064 "data_offset": 2048, 00:30:12.064 "data_size": 63488 00:30:12.064 }, 00:30:12.064 { 00:30:12.064 "name": "BaseBdev2", 00:30:12.064 "uuid": "1e6865e9-9747-45a2-abcf-a9d3bf0300f6", 00:30:12.064 "is_configured": true, 00:30:12.065 "data_offset": 2048, 00:30:12.065 "data_size": 63488 00:30:12.065 }, 00:30:12.065 { 00:30:12.065 "name": "BaseBdev3", 00:30:12.065 "uuid": "8a25cccd-a029-4687-8ddb-9a668b8df6d1", 00:30:12.065 "is_configured": true, 00:30:12.065 "data_offset": 2048, 00:30:12.065 "data_size": 63488 
00:30:12.065 }, 00:30:12.065 { 00:30:12.065 "name": "BaseBdev4", 00:30:12.065 "uuid": "024fb1b4-7b6e-479e-b29e-0d6af96bf812", 00:30:12.065 "is_configured": true, 00:30:12.065 "data_offset": 2048, 00:30:12.065 "data_size": 63488 00:30:12.065 } 00:30:12.065 ] 00:30:12.065 }' 00:30:12.065 02:00:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:12.065 02:00:12 -- common/autotest_common.sh@10 -- # set +x 00:30:12.639 02:00:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:30:12.639 02:00:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:12.639 02:00:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.639 02:00:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:30:12.906 02:00:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:30:12.906 02:00:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:12.906 02:00:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:13.169 [2024-04-24 02:00:13.192961] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:13.429 02:00:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:30:13.429 02:00:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:13.429 02:00:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.429 02:00:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:30:13.714 02:00:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:30:13.714 02:00:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:13.714 02:00:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:13.969 [2024-04-24 02:00:13.805753] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:13.969 02:00:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:30:13.969 02:00:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:13.969 02:00:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:30:13.969 02:00:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.225 02:00:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:30:14.225 02:00:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:14.225 02:00:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:30:14.482 [2024-04-24 02:00:14.374838] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:14.482 [2024-04-24 02:00:14.375156] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:14.482 [2024-04-24 02:00:14.488356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:14.482 [2024-04-24 02:00:14.488731] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:14.482 [2024-04-24 02:00:14.488887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:30:14.482 02:00:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:30:14.482 02:00:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:14.482 02:00:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 
00:30:14.482 02:00:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.739 02:00:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:30:14.739 02:00:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:30:14.739 02:00:14 -- bdev/bdev_raid.sh@287 -- # killprocess 129938 00:30:14.739 02:00:14 -- common/autotest_common.sh@936 -- # '[' -z 129938 ']' 00:30:14.739 02:00:14 -- common/autotest_common.sh@940 -- # kill -0 129938 00:30:14.739 02:00:14 -- common/autotest_common.sh@941 -- # uname 00:30:14.739 02:00:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:14.739 02:00:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129938 00:30:14.739 killing process with pid 129938 00:30:14.739 02:00:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:14.739 02:00:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:14.739 02:00:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129938' 00:30:14.739 02:00:14 -- common/autotest_common.sh@955 -- # kill 129938 00:30:14.739 [2024-04-24 02:00:14.819658] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:14.739 02:00:14 -- common/autotest_common.sh@960 -- # wait 129938 00:30:14.739 [2024-04-24 02:00:14.819804] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:16.636 ************************************ 00:30:16.636 END TEST raid_state_function_test_sb 00:30:16.636 ************************************ 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:30:16.636 00:30:16.636 real 0m17.348s 00:30:16.636 user 0m29.964s 00:30:16.636 sys 0m2.367s 00:30:16.636 02:00:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:16.636 02:00:16 -- common/autotest_common.sh@10 -- # set +x 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:30:16.636 02:00:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:30:16.636 02:00:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:16.636 02:00:16 -- common/autotest_common.sh@10 -- # set +x 00:30:16.636 ************************************ 00:30:16.636 START TEST raid_superblock_test 00:30:16.636 ************************************ 00:30:16.636 02:00:16 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 4 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:30:16.636 02:00:16 -- 
bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@357 -- # raid_pid=130420 00:30:16.636 02:00:16 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130420 /var/tmp/spdk-raid.sock 00:30:16.636 02:00:16 -- common/autotest_common.sh@817 -- # '[' -z 130420 ']' 00:30:16.636 02:00:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:16.637 02:00:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:16.637 02:00:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:16.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:16.637 02:00:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:16.637 02:00:16 -- common/autotest_common.sh@10 -- # set +x 00:30:16.637 [2024-04-24 02:00:16.505363] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:30:16.637 [2024-04-24 02:00:16.505853] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130420 ] 00:30:16.637 [2024-04-24 02:00:16.673478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.895 [2024-04-24 02:00:16.915724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.162 [2024-04-24 02:00:17.203820] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.420 02:00:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:17.420 02:00:17 -- common/autotest_common.sh@850 -- # return 0 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:17.420 02:00:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:30:17.678 malloc1 00:30:17.678 02:00:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:17.939 [2024-04-24 02:00:17.875051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:17.939 [2024-04-24 02:00:17.875397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.939 [2024-04-24 02:00:17.875544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:30:17.939 [2024-04-24 02:00:17.875674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.939 [2024-04-24 02:00:17.878538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.939 [2024-04-24 02:00:17.878734] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt1 00:30:17.939 pt1 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:17.939 02:00:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:30:18.206 malloc2 00:30:18.206 02:00:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:18.463 [2024-04-24 02:00:18.478899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:18.463 [2024-04-24 02:00:18.479246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.463 [2024-04-24 02:00:18.479421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:18.463 [2024-04-24 02:00:18.479557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.463 [2024-04-24 02:00:18.482224] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.463 [2024-04-24 02:00:18.482415] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:18.463 pt2 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.463 02:00:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:30:19.029 malloc3 00:30:19.029 02:00:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:19.029 [2024-04-24 02:00:19.026088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:19.029 [2024-04-24 02:00:19.026421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.029 [2024-04-24 02:00:19.026572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:30:19.029 [2024-04-24 02:00:19.026699] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.029 [2024-04-24 02:00:19.029328] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.029 [2024-04-24 02:00:19.029525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt3 00:30:19.029 pt3 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:19.029 02:00:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:30:19.286 malloc4 00:30:19.286 02:00:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:19.544 [2024-04-24 02:00:19.496813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:19.544 [2024-04-24 02:00:19.497817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.544 [2024-04-24 02:00:19.497972] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:30:19.544 [2024-04-24 02:00:19.498250] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.544 [2024-04-24 02:00:19.500946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.544 [2024-04-24 02:00:19.501174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:19.544 pt4 00:30:19.544 02:00:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:30:19.544 02:00:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:19.544 02:00:19 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:30:19.800 [2024-04-24 02:00:19.729594] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:19.800 [2024-04-24 02:00:19.732006] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:19.800 [2024-04-24 02:00:19.732125] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:19.800 [2024-04-24 02:00:19.732270] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:19.800 [2024-04-24 02:00:19.732621] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:30:19.800 [2024-04-24 02:00:19.732741] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:19.801 [2024-04-24 02:00:19.732952] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:30:19.801 [2024-04-24 02:00:19.733414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:30:19.801 [2024-04-24 02:00:19.733524] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:30:19.801 [2024-04-24 02:00:19.733822] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.801 02:00:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.058 02:00:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:20.058 "name": "raid_bdev1", 00:30:20.058 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:20.058 "strip_size_kb": 0, 00:30:20.058 "state": "online", 00:30:20.058 "raid_level": "raid1", 00:30:20.058 "superblock": true, 00:30:20.058 "num_base_bdevs": 4, 00:30:20.058 "num_base_bdevs_discovered": 4, 00:30:20.058 "num_base_bdevs_operational": 4, 00:30:20.058 "base_bdevs_list": [ 00:30:20.058 { 00:30:20.058 "name": "pt1", 00:30:20.058 "uuid": "06b8772f-5524-56dd-a0a6-62bf0733ddbc", 00:30:20.058 "is_configured": true, 00:30:20.058 "data_offset": 2048, 00:30:20.058 "data_size": 63488 00:30:20.058 }, 00:30:20.058 { 00:30:20.058 "name": "pt2", 00:30:20.058 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:20.058 "is_configured": true, 00:30:20.058 "data_offset": 2048, 00:30:20.058 "data_size": 63488 00:30:20.058 }, 00:30:20.058 { 00:30:20.058 "name": "pt3", 00:30:20.058 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:20.058 "is_configured": true, 00:30:20.058 "data_offset": 2048, 00:30:20.058 "data_size": 63488 00:30:20.058 }, 00:30:20.058 { 00:30:20.058 "name": "pt4", 00:30:20.058 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:20.058 "is_configured": true, 00:30:20.058 "data_offset": 2048, 00:30:20.058 "data_size": 63488 00:30:20.058 } 00:30:20.058 ] 00:30:20.058 }' 00:30:20.058 02:00:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:20.058 02:00:20 -- common/autotest_common.sh@10 -- # set +x 00:30:20.688 02:00:20 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:20.688 02:00:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:30:20.945 [2024-04-24 02:00:20.906387] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:20.945 02:00:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=06a38707-230f-45cd-9730-3517927c5ed0 00:30:20.945 02:00:20 -- bdev/bdev_raid.sh@380 -- # '[' -z 06a38707-230f-45cd-9730-3517927c5ed0 ']' 00:30:20.945 02:00:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:21.203 [2024-04-24 02:00:21.166100] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:21.203 [2024-04-24 02:00:21.166346] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:21.203 [2024-04-24 02:00:21.166557] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:21.203 [2024-04-24 02:00:21.166765] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:30:21.203 [2024-04-24 02:00:21.166862] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:30:21.203 02:00:21 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.203 02:00:21 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:30:21.461 02:00:21 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:30:21.461 02:00:21 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:30:21.461 02:00:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:30:21.461 02:00:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:21.718 02:00:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:30:21.718 02:00:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:21.976 02:00:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:30:21.976 02:00:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:30:22.233 02:00:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:30:22.233 02:00:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:30:22.490 02:00:22 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:22.490 02:00:22 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:22.765 02:00:22 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:30:22.765 02:00:22 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:30:22.765 02:00:22 -- common/autotest_common.sh@638 -- # local es=0 00:30:22.765 02:00:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:30:22.765 02:00:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:22.765 02:00:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:22.765 02:00:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:22.765 02:00:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:22.765 02:00:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:22.765 02:00:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:22.765 02:00:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:22.765 02:00:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:22.765 02:00:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:30:22.765 [2024-04-24 02:00:22.806443] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:22.765 [2024-04-24 02:00:22.808951] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 
is claimed 00:30:22.765 [2024-04-24 02:00:22.809182] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:22.765 [2024-04-24 02:00:22.809257] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:30:22.765 [2024-04-24 02:00:22.809417] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:30:22.765 [2024-04-24 02:00:22.809593] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:30:22.765 [2024-04-24 02:00:22.809723] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:30:22.765 [2024-04-24 02:00:22.809818] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:30:22.765 [2024-04-24 02:00:22.809916] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:22.765 [2024-04-24 02:00:22.810098] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:30:22.765 request: 00:30:22.765 { 00:30:22.765 "name": "raid_bdev1", 00:30:22.765 "raid_level": "raid1", 00:30:22.765 "base_bdevs": [ 00:30:22.765 "malloc1", 00:30:22.765 "malloc2", 00:30:22.765 "malloc3", 00:30:22.765 "malloc4" 00:30:22.765 ], 00:30:22.765 "superblock": false, 00:30:22.765 "method": "bdev_raid_create", 00:30:22.765 "req_id": 1 00:30:22.765 } 00:30:22.765 Got JSON-RPC error response 00:30:22.765 response: 00:30:22.765 { 00:30:22.765 "code": -17, 00:30:22.765 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:22.765 } 00:30:22.765 02:00:22 -- common/autotest_common.sh@641 -- # es=1 00:30:22.765 02:00:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:22.765 02:00:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:22.765 02:00:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:22.765 02:00:22 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.765 02:00:22 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:30:23.025 02:00:23 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:30:23.025 02:00:23 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:30:23.025 02:00:23 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:23.284 [2024-04-24 02:00:23.266402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:23.284 [2024-04-24 02:00:23.266688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:23.284 [2024-04-24 02:00:23.266762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:30:23.284 [2024-04-24 02:00:23.266875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:23.284 [2024-04-24 02:00:23.269341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:23.284 [2024-04-24 02:00:23.269561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:23.284 [2024-04-24 02:00:23.269780] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:30:23.284 [2024-04-24 02:00:23.269943] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:23.284 pt1 
00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.284 02:00:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.543 02:00:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:23.543 "name": "raid_bdev1", 00:30:23.543 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:23.543 "strip_size_kb": 0, 00:30:23.543 "state": "configuring", 00:30:23.543 "raid_level": "raid1", 00:30:23.543 "superblock": true, 00:30:23.543 "num_base_bdevs": 4, 00:30:23.543 "num_base_bdevs_discovered": 1, 00:30:23.543 "num_base_bdevs_operational": 4, 00:30:23.543 "base_bdevs_list": [ 00:30:23.543 { 00:30:23.543 "name": "pt1", 00:30:23.543 "uuid": "06b8772f-5524-56dd-a0a6-62bf0733ddbc", 00:30:23.543 "is_configured": true, 00:30:23.543 "data_offset": 2048, 00:30:23.543 "data_size": 63488 00:30:23.543 }, 00:30:23.543 { 00:30:23.543 "name": null, 00:30:23.543 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:23.543 "is_configured": false, 00:30:23.543 "data_offset": 2048, 00:30:23.543 "data_size": 63488 00:30:23.543 }, 00:30:23.543 { 00:30:23.543 "name": null, 00:30:23.543 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:23.543 "is_configured": false, 00:30:23.543 "data_offset": 2048, 00:30:23.543 "data_size": 63488 00:30:23.543 }, 00:30:23.543 { 00:30:23.543 "name": null, 00:30:23.543 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:23.543 "is_configured": false, 00:30:23.543 "data_offset": 2048, 00:30:23.543 "data_size": 63488 00:30:23.543 } 00:30:23.543 ] 00:30:23.543 }' 00:30:23.543 02:00:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:23.543 02:00:23 -- common/autotest_common.sh@10 -- # set +x 00:30:24.111 02:00:24 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:30:24.111 02:00:24 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:24.368 [2024-04-24 02:00:24.226633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:24.368 [2024-04-24 02:00:24.226936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.368 [2024-04-24 02:00:24.227071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:30:24.368 [2024-04-24 02:00:24.227177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.368 [2024-04-24 02:00:24.227723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.368 [2024-04-24 02:00:24.227895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:30:24.368 [2024-04-24 02:00:24.228130] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:30:24.368 [2024-04-24 02:00:24.228269] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:24.368 pt2 00:30:24.368 02:00:24 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:24.368 [2024-04-24 02:00:24.442738] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.634 02:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:24.634 "name": "raid_bdev1", 00:30:24.634 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:24.634 "strip_size_kb": 0, 00:30:24.634 "state": "configuring", 00:30:24.634 "raid_level": "raid1", 00:30:24.634 "superblock": true, 00:30:24.634 "num_base_bdevs": 4, 00:30:24.634 "num_base_bdevs_discovered": 1, 00:30:24.634 "num_base_bdevs_operational": 4, 00:30:24.634 "base_bdevs_list": [ 00:30:24.634 { 00:30:24.634 "name": "pt1", 00:30:24.634 "uuid": "06b8772f-5524-56dd-a0a6-62bf0733ddbc", 00:30:24.634 "is_configured": true, 00:30:24.634 "data_offset": 2048, 00:30:24.634 "data_size": 63488 00:30:24.634 }, 00:30:24.634 { 00:30:24.634 "name": null, 00:30:24.634 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:24.634 "is_configured": false, 00:30:24.634 "data_offset": 2048, 00:30:24.634 "data_size": 63488 00:30:24.634 }, 00:30:24.635 { 00:30:24.635 "name": null, 00:30:24.635 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:24.635 "is_configured": false, 00:30:24.635 "data_offset": 2048, 00:30:24.635 "data_size": 63488 00:30:24.635 }, 00:30:24.635 { 00:30:24.635 "name": null, 00:30:24.635 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:24.635 "is_configured": false, 00:30:24.635 "data_offset": 2048, 00:30:24.635 "data_size": 63488 00:30:24.635 } 00:30:24.635 ] 00:30:24.635 }' 00:30:24.635 02:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:24.635 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:30:25.567 02:00:25 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:30:25.567 02:00:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:30:25.567 02:00:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:25.567 [2024-04-24 02:00:25.510993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:25.567 
[2024-04-24 02:00:25.511363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:25.567 [2024-04-24 02:00:25.511453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:25.567 [2024-04-24 02:00:25.511675] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:25.567 [2024-04-24 02:00:25.512240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:25.567 [2024-04-24 02:00:25.512431] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:25.567 [2024-04-24 02:00:25.512661] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:30:25.567 [2024-04-24 02:00:25.512780] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:25.567 pt2 00:30:25.567 02:00:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:30:25.567 02:00:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:30:25.567 02:00:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:25.824 [2024-04-24 02:00:25.758985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:25.824 [2024-04-24 02:00:25.759334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:25.824 [2024-04-24 02:00:25.759417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:30:25.824 [2024-04-24 02:00:25.759579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:25.824 [2024-04-24 02:00:25.760130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:25.824 [2024-04-24 02:00:25.760323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:25.824 [2024-04-24 02:00:25.760551] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:30:25.824 [2024-04-24 02:00:25.760672] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:25.824 pt3 00:30:25.824 02:00:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:30:25.824 02:00:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:30:25.824 02:00:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:26.082 [2024-04-24 02:00:26.015094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:26.082 [2024-04-24 02:00:26.015431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:26.082 [2024-04-24 02:00:26.015606] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:26.082 [2024-04-24 02:00:26.015724] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:26.082 [2024-04-24 02:00:26.016265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:26.082 [2024-04-24 02:00:26.016472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:26.082 [2024-04-24 02:00:26.016725] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:30:26.082 [2024-04-24 02:00:26.016844] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:26.082 [2024-04-24 02:00:26.017047] 
bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:30:26.082 [2024-04-24 02:00:26.017145] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:26.082 [2024-04-24 02:00:26.017381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:26.082 [2024-04-24 02:00:26.017815] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:30:26.082 [2024-04-24 02:00:26.017934] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:30:26.082 [2024-04-24 02:00:26.018199] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.082 pt4 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.082 02:00:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.340 02:00:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:26.340 "name": "raid_bdev1", 00:30:26.340 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:26.340 "strip_size_kb": 0, 00:30:26.340 "state": "online", 00:30:26.340 "raid_level": "raid1", 00:30:26.340 "superblock": true, 00:30:26.340 "num_base_bdevs": 4, 00:30:26.340 "num_base_bdevs_discovered": 4, 00:30:26.340 "num_base_bdevs_operational": 4, 00:30:26.340 "base_bdevs_list": [ 00:30:26.340 { 00:30:26.340 "name": "pt1", 00:30:26.340 "uuid": "06b8772f-5524-56dd-a0a6-62bf0733ddbc", 00:30:26.340 "is_configured": true, 00:30:26.340 "data_offset": 2048, 00:30:26.340 "data_size": 63488 00:30:26.340 }, 00:30:26.340 { 00:30:26.340 "name": "pt2", 00:30:26.340 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:26.340 "is_configured": true, 00:30:26.340 "data_offset": 2048, 00:30:26.340 "data_size": 63488 00:30:26.340 }, 00:30:26.340 { 00:30:26.340 "name": "pt3", 00:30:26.340 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:26.340 "is_configured": true, 00:30:26.340 "data_offset": 2048, 00:30:26.340 "data_size": 63488 00:30:26.340 }, 00:30:26.340 { 00:30:26.340 "name": "pt4", 00:30:26.340 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:26.340 "is_configured": true, 00:30:26.340 "data_offset": 2048, 00:30:26.340 "data_size": 63488 00:30:26.340 } 00:30:26.340 ] 00:30:26.340 }' 00:30:26.340 02:00:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:26.340 02:00:26 -- common/autotest_common.sh@10 -- # set +x 00:30:26.908 02:00:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:30:26.908 02:00:26 -- bdev/bdev_raid.sh@430 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:27.178 [2024-04-24 02:00:27.027606] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:27.178 02:00:27 -- bdev/bdev_raid.sh@430 -- # '[' 06a38707-230f-45cd-9730-3517927c5ed0 '!=' 06a38707-230f-45cd-9730-3517927c5ed0 ']' 00:30:27.178 02:00:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:30:27.178 02:00:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:27.178 02:00:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:30:27.178 02:00:27 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:27.435 [2024-04-24 02:00:27.315426] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.435 02:00:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.694 02:00:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:27.694 "name": "raid_bdev1", 00:30:27.694 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:27.694 "strip_size_kb": 0, 00:30:27.694 "state": "online", 00:30:27.694 "raid_level": "raid1", 00:30:27.694 "superblock": true, 00:30:27.694 "num_base_bdevs": 4, 00:30:27.694 "num_base_bdevs_discovered": 3, 00:30:27.694 "num_base_bdevs_operational": 3, 00:30:27.694 "base_bdevs_list": [ 00:30:27.694 { 00:30:27.694 "name": null, 00:30:27.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.694 "is_configured": false, 00:30:27.694 "data_offset": 2048, 00:30:27.694 "data_size": 63488 00:30:27.694 }, 00:30:27.694 { 00:30:27.694 "name": "pt2", 00:30:27.694 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:27.694 "is_configured": true, 00:30:27.694 "data_offset": 2048, 00:30:27.694 "data_size": 63488 00:30:27.694 }, 00:30:27.694 { 00:30:27.694 "name": "pt3", 00:30:27.694 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:27.694 "is_configured": true, 00:30:27.694 "data_offset": 2048, 00:30:27.694 "data_size": 63488 00:30:27.694 }, 00:30:27.694 { 00:30:27.694 "name": "pt4", 00:30:27.694 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:27.694 "is_configured": true, 00:30:27.694 "data_offset": 2048, 00:30:27.694 "data_size": 63488 00:30:27.694 } 00:30:27.694 ] 00:30:27.694 }' 00:30:27.694 02:00:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:27.694 02:00:27 -- common/autotest_common.sh@10 -- # set +x 00:30:28.262 02:00:28 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:28.521 [2024-04-24 02:00:28.527676] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:28.521 [2024-04-24 02:00:28.527978] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:28.521 [2024-04-24 02:00:28.528178] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:28.521 [2024-04-24 02:00:28.528366] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:28.521 [2024-04-24 02:00:28.528455] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:30:28.521 02:00:28 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.521 02:00:28 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:30:28.779 02:00:28 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:30:28.779 02:00:28 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:30:28.779 02:00:28 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:30:28.779 02:00:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:30:28.779 02:00:28 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:29.037 02:00:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:30:29.037 02:00:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:30:29.037 02:00:28 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:30:29.311 02:00:29 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:30:29.311 02:00:29 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:30:29.311 02:00:29 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:30:29.577 02:00:29 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:30:29.577 02:00:29 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:30:29.577 02:00:29 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:30:29.577 02:00:29 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:30:29.577 02:00:29 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:29.835 [2024-04-24 02:00:29.699910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:29.835 [2024-04-24 02:00:29.700198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:29.835 [2024-04-24 02:00:29.700333] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:29.835 [2024-04-24 02:00:29.700451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:29.835 [2024-04-24 02:00:29.703095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:29.835 [2024-04-24 02:00:29.703341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:29.835 [2024-04-24 02:00:29.703597] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:30:29.835 [2024-04-24 02:00:29.703724] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:29.835 pt2 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@118 -- 
# local expected_state=configuring 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.835 02:00:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.182 02:00:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:30.182 "name": "raid_bdev1", 00:30:30.182 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:30.182 "strip_size_kb": 0, 00:30:30.182 "state": "configuring", 00:30:30.182 "raid_level": "raid1", 00:30:30.182 "superblock": true, 00:30:30.182 "num_base_bdevs": 4, 00:30:30.182 "num_base_bdevs_discovered": 1, 00:30:30.182 "num_base_bdevs_operational": 3, 00:30:30.182 "base_bdevs_list": [ 00:30:30.182 { 00:30:30.182 "name": null, 00:30:30.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.182 "is_configured": false, 00:30:30.182 "data_offset": 2048, 00:30:30.182 "data_size": 63488 00:30:30.182 }, 00:30:30.182 { 00:30:30.182 "name": "pt2", 00:30:30.182 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:30.182 "is_configured": true, 00:30:30.182 "data_offset": 2048, 00:30:30.182 "data_size": 63488 00:30:30.182 }, 00:30:30.182 { 00:30:30.182 "name": null, 00:30:30.182 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:30.182 "is_configured": false, 00:30:30.182 "data_offset": 2048, 00:30:30.182 "data_size": 63488 00:30:30.182 }, 00:30:30.182 { 00:30:30.182 "name": null, 00:30:30.182 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:30.182 "is_configured": false, 00:30:30.182 "data_offset": 2048, 00:30:30.182 "data_size": 63488 00:30:30.182 } 00:30:30.182 ] 00:30:30.182 }' 00:30:30.182 02:00:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:30.182 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:30:30.767 02:00:30 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:30:30.767 02:00:30 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:30:30.767 02:00:30 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:30.767 [2024-04-24 02:00:30.840311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:30.767 [2024-04-24 02:00:30.840593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:30.767 [2024-04-24 02:00:30.840733] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:30:30.767 [2024-04-24 02:00:30.840831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:30.767 [2024-04-24 02:00:30.841367] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:30.767 [2024-04-24 02:00:30.841547] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:30.767 [2024-04-24 02:00:30.841748] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:30:30.767 [2024-04-24 
02:00:30.841851] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:30.767 pt3 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.025 02:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.025 02:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:31.025 "name": "raid_bdev1", 00:30:31.025 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:31.025 "strip_size_kb": 0, 00:30:31.025 "state": "configuring", 00:30:31.025 "raid_level": "raid1", 00:30:31.025 "superblock": true, 00:30:31.025 "num_base_bdevs": 4, 00:30:31.025 "num_base_bdevs_discovered": 2, 00:30:31.025 "num_base_bdevs_operational": 3, 00:30:31.025 "base_bdevs_list": [ 00:30:31.025 { 00:30:31.025 "name": null, 00:30:31.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.025 "is_configured": false, 00:30:31.025 "data_offset": 2048, 00:30:31.025 "data_size": 63488 00:30:31.025 }, 00:30:31.025 { 00:30:31.025 "name": "pt2", 00:30:31.025 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:31.025 "is_configured": true, 00:30:31.025 "data_offset": 2048, 00:30:31.025 "data_size": 63488 00:30:31.025 }, 00:30:31.025 { 00:30:31.025 "name": "pt3", 00:30:31.025 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:31.025 "is_configured": true, 00:30:31.025 "data_offset": 2048, 00:30:31.026 "data_size": 63488 00:30:31.026 }, 00:30:31.026 { 00:30:31.026 "name": null, 00:30:31.026 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:31.026 "is_configured": false, 00:30:31.026 "data_offset": 2048, 00:30:31.026 "data_size": 63488 00:30:31.026 } 00:30:31.026 ] 00:30:31.026 }' 00:30:31.026 02:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:31.026 02:00:31 -- common/autotest_common.sh@10 -- # set +x 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@462 -- # i=3 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:31.960 [2024-04-24 02:00:31.948583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:31.960 [2024-04-24 02:00:31.948882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:31.960 [2024-04-24 02:00:31.949046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:30:31.960 [2024-04-24 02:00:31.949147] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:30:31.960 [2024-04-24 02:00:31.949767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:31.960 [2024-04-24 02:00:31.949940] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:31.960 [2024-04-24 02:00:31.950186] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:30:31.960 [2024-04-24 02:00:31.950310] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:31.960 [2024-04-24 02:00:31.950554] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:30:31.960 [2024-04-24 02:00:31.950664] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:31.960 [2024-04-24 02:00:31.950887] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:31.960 [2024-04-24 02:00:31.951361] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:30:31.960 [2024-04-24 02:00:31.951479] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:30:31.960 [2024-04-24 02:00:31.951728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.960 pt4 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:31.960 02:00:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:31.961 02:00:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:31.961 02:00:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.961 02:00:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.218 02:00:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:32.218 "name": "raid_bdev1", 00:30:32.219 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:32.219 "strip_size_kb": 0, 00:30:32.219 "state": "online", 00:30:32.219 "raid_level": "raid1", 00:30:32.219 "superblock": true, 00:30:32.219 "num_base_bdevs": 4, 00:30:32.219 "num_base_bdevs_discovered": 3, 00:30:32.219 "num_base_bdevs_operational": 3, 00:30:32.219 "base_bdevs_list": [ 00:30:32.219 { 00:30:32.219 "name": null, 00:30:32.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.219 "is_configured": false, 00:30:32.219 "data_offset": 2048, 00:30:32.219 "data_size": 63488 00:30:32.219 }, 00:30:32.219 { 00:30:32.219 "name": "pt2", 00:30:32.219 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:32.219 "is_configured": true, 00:30:32.219 "data_offset": 2048, 00:30:32.219 "data_size": 63488 00:30:32.219 }, 00:30:32.219 { 00:30:32.219 "name": "pt3", 00:30:32.219 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:32.219 "is_configured": true, 00:30:32.219 "data_offset": 2048, 00:30:32.219 "data_size": 63488 00:30:32.219 }, 00:30:32.219 { 00:30:32.219 "name": "pt4", 00:30:32.219 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 
00:30:32.219 "is_configured": true, 00:30:32.219 "data_offset": 2048, 00:30:32.219 "data_size": 63488 00:30:32.219 } 00:30:32.219 ] 00:30:32.219 }' 00:30:32.219 02:00:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:32.219 02:00:32 -- common/autotest_common.sh@10 -- # set +x 00:30:32.784 02:00:32 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:30:32.784 02:00:32 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:33.042 [2024-04-24 02:00:33.004826] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:33.042 [2024-04-24 02:00:33.005071] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:33.042 [2024-04-24 02:00:33.005261] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:33.042 [2024-04-24 02:00:33.005373] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:33.042 [2024-04-24 02:00:33.005586] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:30:33.042 02:00:33 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.042 02:00:33 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:30:33.299 02:00:33 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:30:33.299 02:00:33 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:30:33.299 02:00:33 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:33.556 [2024-04-24 02:00:33.616906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:33.556 [2024-04-24 02:00:33.618485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.556 [2024-04-24 02:00:33.618576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:30:33.556 [2024-04-24 02:00:33.618730] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.556 [2024-04-24 02:00:33.621384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.556 [2024-04-24 02:00:33.621588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:33.556 [2024-04-24 02:00:33.621836] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:30:33.556 [2024-04-24 02:00:33.621961] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:33.556 pt1 00:30:33.556 02:00:33 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:30:33.556 02:00:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:33.815 02:00:33 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:33.815 "name": "raid_bdev1", 00:30:33.815 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:33.815 "strip_size_kb": 0, 00:30:33.815 "state": "configuring", 00:30:33.815 "raid_level": "raid1", 00:30:33.815 "superblock": true, 00:30:33.815 "num_base_bdevs": 4, 00:30:33.815 "num_base_bdevs_discovered": 1, 00:30:33.815 "num_base_bdevs_operational": 4, 00:30:33.815 "base_bdevs_list": [ 00:30:33.815 { 00:30:33.815 "name": "pt1", 00:30:33.815 "uuid": "06b8772f-5524-56dd-a0a6-62bf0733ddbc", 00:30:33.815 "is_configured": true, 00:30:33.815 "data_offset": 2048, 00:30:33.815 "data_size": 63488 00:30:33.815 }, 00:30:33.815 { 00:30:33.815 "name": null, 00:30:33.815 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:33.815 "is_configured": false, 00:30:33.815 "data_offset": 2048, 00:30:33.815 "data_size": 63488 00:30:33.815 }, 00:30:33.815 { 00:30:33.815 "name": null, 00:30:33.815 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:33.815 "is_configured": false, 00:30:33.815 "data_offset": 2048, 00:30:33.815 "data_size": 63488 00:30:33.815 }, 00:30:33.815 { 00:30:33.815 "name": null, 00:30:33.815 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:33.815 "is_configured": false, 00:30:33.815 "data_offset": 2048, 00:30:33.815 "data_size": 63488 00:30:33.815 } 00:30:33.815 ] 00:30:33.815 }' 00:30:33.815 02:00:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:33.815 02:00:33 -- common/autotest_common.sh@10 -- # set +x 00:30:34.389 02:00:34 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:30:34.389 02:00:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:30:34.389 02:00:34 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:34.956 02:00:34 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:30:34.956 02:00:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:30:34.956 02:00:34 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:30:34.956 02:00:34 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:30:34.956 02:00:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:30:34.956 02:00:34 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:30:35.215 02:00:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:30:35.215 02:00:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:30:35.215 02:00:35 -- bdev/bdev_raid.sh@489 -- # i=3 00:30:35.215 02:00:35 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:35.473 [2024-04-24 02:00:35.394395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:35.473 [2024-04-24 02:00:35.394904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.473 [2024-04-24 02:00:35.395120] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:35.473 [2024-04-24 02:00:35.395324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.473 [2024-04-24 02:00:35.396029] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.473 [2024-04-24 02:00:35.396302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:35.473 [2024-04-24 02:00:35.396622] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:30:35.473 [2024-04-24 02:00:35.396807] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:35.473 [2024-04-24 02:00:35.396928] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:35.473 [2024-04-24 02:00:35.397094] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:30:35.473 [2024-04-24 02:00:35.397405] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:35.473 pt4 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.473 02:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.732 02:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:35.732 "name": "raid_bdev1", 00:30:35.732 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:35.732 "strip_size_kb": 0, 00:30:35.732 "state": "configuring", 00:30:35.732 "raid_level": "raid1", 00:30:35.732 "superblock": true, 00:30:35.732 "num_base_bdevs": 4, 00:30:35.732 "num_base_bdevs_discovered": 1, 00:30:35.732 "num_base_bdevs_operational": 3, 00:30:35.732 "base_bdevs_list": [ 00:30:35.732 { 00:30:35.732 "name": null, 00:30:35.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.732 "is_configured": false, 00:30:35.732 "data_offset": 2048, 00:30:35.732 "data_size": 63488 00:30:35.732 }, 00:30:35.732 { 00:30:35.732 "name": null, 00:30:35.732 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:35.732 "is_configured": false, 00:30:35.732 "data_offset": 2048, 00:30:35.732 "data_size": 63488 00:30:35.732 }, 00:30:35.732 { 00:30:35.732 "name": null, 00:30:35.732 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:35.732 "is_configured": false, 00:30:35.732 "data_offset": 2048, 00:30:35.732 "data_size": 63488 00:30:35.732 }, 00:30:35.732 { 00:30:35.732 "name": "pt4", 00:30:35.732 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:35.732 "is_configured": true, 00:30:35.732 "data_offset": 2048, 00:30:35.732 "data_size": 63488 00:30:35.732 } 00:30:35.732 ] 00:30:35.732 }' 00:30:35.732 02:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:35.732 02:00:35 -- common/autotest_common.sh@10 -- # set +x 00:30:36.311 02:00:36 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:30:36.311 02:00:36 -- 
bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:30:36.311 02:00:36 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:36.568 [2024-04-24 02:00:36.398622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:36.568 [2024-04-24 02:00:36.399095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.568 [2024-04-24 02:00:36.399314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:30:36.568 [2024-04-24 02:00:36.399506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.568 [2024-04-24 02:00:36.400195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.568 [2024-04-24 02:00:36.400453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:36.568 [2024-04-24 02:00:36.400768] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:30:36.568 [2024-04-24 02:00:36.400981] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:36.568 pt2 00:30:36.568 02:00:36 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:30:36.568 02:00:36 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:30:36.568 02:00:36 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:36.568 [2024-04-24 02:00:36.638659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:36.568 [2024-04-24 02:00:36.639097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.568 [2024-04-24 02:00:36.639319] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:30:36.568 [2024-04-24 02:00:36.639511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.568 [2024-04-24 02:00:36.640192] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.568 [2024-04-24 02:00:36.640448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:36.568 [2024-04-24 02:00:36.640762] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:30:36.568 [2024-04-24 02:00:36.640978] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:36.568 [2024-04-24 02:00:36.641313] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:30:36.568 [2024-04-24 02:00:36.641489] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:36.568 [2024-04-24 02:00:36.641761] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:36.568 [2024-04-24 02:00:36.642303] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:30:36.568 [2024-04-24 02:00:36.642488] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:30:36.568 [2024-04-24 02:00:36.642857] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.568 pt3 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@502 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:36.826 "name": "raid_bdev1", 00:30:36.826 "uuid": "06a38707-230f-45cd-9730-3517927c5ed0", 00:30:36.826 "strip_size_kb": 0, 00:30:36.826 "state": "online", 00:30:36.826 "raid_level": "raid1", 00:30:36.826 "superblock": true, 00:30:36.826 "num_base_bdevs": 4, 00:30:36.826 "num_base_bdevs_discovered": 3, 00:30:36.826 "num_base_bdevs_operational": 3, 00:30:36.826 "base_bdevs_list": [ 00:30:36.826 { 00:30:36.826 "name": null, 00:30:36.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.826 "is_configured": false, 00:30:36.826 "data_offset": 2048, 00:30:36.826 "data_size": 63488 00:30:36.826 }, 00:30:36.826 { 00:30:36.826 "name": "pt2", 00:30:36.826 "uuid": "02d10480-75e7-50d7-9f9d-b1ed86d97f42", 00:30:36.826 "is_configured": true, 00:30:36.826 "data_offset": 2048, 00:30:36.826 "data_size": 63488 00:30:36.826 }, 00:30:36.826 { 00:30:36.826 "name": "pt3", 00:30:36.826 "uuid": "98a6bdc0-acd1-5116-a795-0046d8811ec3", 00:30:36.826 "is_configured": true, 00:30:36.826 "data_offset": 2048, 00:30:36.826 "data_size": 63488 00:30:36.826 }, 00:30:36.826 { 00:30:36.826 "name": "pt4", 00:30:36.826 "uuid": "ca31c285-8f5e-54f6-a8e4-eee1add8c227", 00:30:36.826 "is_configured": true, 00:30:36.826 "data_offset": 2048, 00:30:36.826 "data_size": 63488 00:30:36.826 } 00:30:36.826 ] 00:30:36.826 }' 00:30:36.826 02:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:36.826 02:00:36 -- common/autotest_common.sh@10 -- # set +x 00:30:37.759 02:00:37 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:37.759 02:00:37 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:30:37.759 [2024-04-24 02:00:37.751551] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:37.759 02:00:37 -- bdev/bdev_raid.sh@506 -- # '[' 06a38707-230f-45cd-9730-3517927c5ed0 '!=' 06a38707-230f-45cd-9730-3517927c5ed0 ']' 00:30:37.759 02:00:37 -- bdev/bdev_raid.sh@511 -- # killprocess 130420 00:30:37.759 02:00:37 -- common/autotest_common.sh@936 -- # '[' -z 130420 ']' 00:30:37.759 02:00:37 -- common/autotest_common.sh@940 -- # kill -0 130420 00:30:37.759 02:00:37 -- common/autotest_common.sh@941 -- # uname 00:30:37.759 02:00:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:37.759 02:00:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130420 00:30:37.759 02:00:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:37.759 02:00:37 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:37.759 02:00:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130420' 00:30:37.759 killing process with pid 130420 00:30:37.759 02:00:37 -- common/autotest_common.sh@955 -- # kill 130420 00:30:37.759 02:00:37 -- common/autotest_common.sh@960 -- # wait 130420 00:30:37.759 [2024-04-24 02:00:37.800919] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:37.759 [2024-04-24 02:00:37.801192] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:37.759 [2024-04-24 02:00:37.801486] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:37.759 [2024-04-24 02:00:37.801660] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:30:38.342 [2024-04-24 02:00:38.267061] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:39.763 02:00:39 -- bdev/bdev_raid.sh@513 -- # return 0 00:30:39.763 00:30:39.763 real 0m23.317s 00:30:39.763 user 0m41.452s 00:30:39.764 sys 0m3.300s 00:30:39.764 02:00:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:39.764 02:00:39 -- common/autotest_common.sh@10 -- # set +x 00:30:39.764 ************************************ 00:30:39.764 END TEST raid_superblock_test 00:30:39.764 ************************************ 00:30:39.764 02:00:39 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:30:39.764 02:00:39 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:30:39.764 02:00:39 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:30:39.764 02:00:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:30:39.764 02:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:39.764 02:00:39 -- common/autotest_common.sh@10 -- # set +x 00:30:40.021 ************************************ 00:30:40.021 START TEST raid_rebuild_test 00:30:40.021 ************************************ 00:30:40.021 02:00:39 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false false 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:40.021 02:00:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:30:40.022 02:00:39 -- 
bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=131111 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131111 /var/tmp/spdk-raid.sock 00:30:40.022 02:00:39 -- common/autotest_common.sh@817 -- # '[' -z 131111 ']' 00:30:40.022 02:00:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:40.022 02:00:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:40.022 02:00:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:40.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:40.022 02:00:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:40.022 02:00:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:40.022 02:00:39 -- common/autotest_common.sh@10 -- # set +x 00:30:40.022 [2024-04-24 02:00:39.946744] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:30:40.022 [2024-04-24 02:00:39.947142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131111 ] 00:30:40.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:40.022 Zero copy mechanism will not be used. 00:30:40.281 [2024-04-24 02:00:40.129793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.539 [2024-04-24 02:00:40.417602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.797 [2024-04-24 02:00:40.670249] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:40.797 02:00:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:40.797 02:00:40 -- common/autotest_common.sh@850 -- # return 0 00:30:40.797 02:00:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:30:40.797 02:00:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:30:40.797 02:00:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:41.363 BaseBdev1 00:30:41.363 02:00:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:30:41.363 02:00:41 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:30:41.363 02:00:41 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:41.620 BaseBdev2 00:30:41.620 02:00:41 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:41.877 spare_malloc 00:30:41.877 02:00:41 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:42.136 spare_delay 00:30:42.136 02:00:42 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:42.393 [2024-04-24 02:00:42.254251] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:42.393 [2024-04-24 02:00:42.254579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:42.393 [2024-04-24 02:00:42.254744] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:42.393 [2024-04-24 02:00:42.254902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:42.393 [2024-04-24 02:00:42.258154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:42.393 [2024-04-24 02:00:42.258358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:42.393 spare 00:30:42.393 02:00:42 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:42.393 [2024-04-24 02:00:42.462817] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:42.393 [2024-04-24 02:00:42.465178] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:42.393 [2024-04-24 02:00:42.465444] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:30:42.393 [2024-04-24 02:00:42.465558] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:42.393 [2024-04-24 02:00:42.465770] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:30:42.393 [2024-04-24 02:00:42.466275] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:30:42.393 [2024-04-24 02:00:42.466399] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:30:42.393 [2024-04-24 02:00:42.466723] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.651 02:00:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.909 02:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:42.909 "name": "raid_bdev1", 00:30:42.909 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:42.909 "strip_size_kb": 0, 00:30:42.909 "state": "online", 00:30:42.909 "raid_level": "raid1", 00:30:42.909 "superblock": false, 00:30:42.909 "num_base_bdevs": 2, 00:30:42.909 "num_base_bdevs_discovered": 2, 00:30:42.909 "num_base_bdevs_operational": 2, 00:30:42.909 "base_bdevs_list": [ 00:30:42.909 { 00:30:42.909 "name": "BaseBdev1", 00:30:42.909 "uuid": "47fed6cf-eb50-4198-b1bb-002ec6889bd7", 00:30:42.909 "is_configured": true, 
00:30:42.909 "data_offset": 0, 00:30:42.909 "data_size": 65536 00:30:42.909 }, 00:30:42.909 { 00:30:42.909 "name": "BaseBdev2", 00:30:42.909 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:42.909 "is_configured": true, 00:30:42.909 "data_offset": 0, 00:30:42.909 "data_size": 65536 00:30:42.909 } 00:30:42.909 ] 00:30:42.909 }' 00:30:42.909 02:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:42.909 02:00:42 -- common/autotest_common.sh@10 -- # set +x 00:30:43.475 02:00:43 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:30:43.475 02:00:43 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:43.734 [2024-04-24 02:00:43.723355] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:43.734 02:00:43 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:30:43.734 02:00:43 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.734 02:00:43 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:43.991 02:00:44 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:30:43.991 02:00:44 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:30:43.991 02:00:44 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:30:43.991 02:00:44 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@12 -- # local i 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:43.991 02:00:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:44.556 [2024-04-24 02:00:44.363315] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:30:44.556 /dev/nbd0 00:30:44.556 02:00:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:44.556 02:00:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:44.556 02:00:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:30:44.556 02:00:44 -- common/autotest_common.sh@855 -- # local i 00:30:44.556 02:00:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:30:44.556 02:00:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:30:44.556 02:00:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:30:44.556 02:00:44 -- common/autotest_common.sh@859 -- # break 00:30:44.556 02:00:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:44.556 02:00:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:44.556 02:00:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:44.556 1+0 records in 00:30:44.556 1+0 records out 00:30:44.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842253 s, 4.9 MB/s 00:30:44.556 02:00:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:44.556 02:00:44 -- common/autotest_common.sh@872 -- # size=4096 00:30:44.556 
02:00:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:44.556 02:00:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:30:44.556 02:00:44 -- common/autotest_common.sh@875 -- # return 0 00:30:44.556 02:00:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:44.556 02:00:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:44.556 02:00:44 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:30:44.556 02:00:44 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:30:44.556 02:00:44 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:30:49.860 65536+0 records in 00:30:49.860 65536+0 records out 00:30:49.860 33554432 bytes (34 MB, 32 MiB) copied, 5.04692 s, 6.6 MB/s 00:30:49.860 02:00:49 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@51 -- # local i 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:49.860 [2024-04-24 02:00:49.764918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@41 -- # break 00:30:49.860 02:00:49 -- bdev/nbd_common.sh@45 -- # return 0 00:30:49.860 02:00:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:50.119 [2024-04-24 02:00:50.052654] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.119 02:00:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.377 02:00:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:50.377 "name": "raid_bdev1", 00:30:50.377 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:50.377 "strip_size_kb": 
0, 00:30:50.377 "state": "online", 00:30:50.377 "raid_level": "raid1", 00:30:50.377 "superblock": false, 00:30:50.377 "num_base_bdevs": 2, 00:30:50.377 "num_base_bdevs_discovered": 1, 00:30:50.377 "num_base_bdevs_operational": 1, 00:30:50.377 "base_bdevs_list": [ 00:30:50.377 { 00:30:50.377 "name": null, 00:30:50.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.377 "is_configured": false, 00:30:50.377 "data_offset": 0, 00:30:50.377 "data_size": 65536 00:30:50.377 }, 00:30:50.377 { 00:30:50.377 "name": "BaseBdev2", 00:30:50.377 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:50.377 "is_configured": true, 00:30:50.377 "data_offset": 0, 00:30:50.377 "data_size": 65536 00:30:50.377 } 00:30:50.377 ] 00:30:50.377 }' 00:30:50.377 02:00:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:50.378 02:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:51.311 02:00:51 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:51.311 [2024-04-24 02:00:51.288886] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:30:51.311 [2024-04-24 02:00:51.289261] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:51.311 [2024-04-24 02:00:51.307010] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:30:51.311 [2024-04-24 02:00:51.309890] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:51.311 02:00:51 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:52.686 "name": "raid_bdev1", 00:30:52.686 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:52.686 "strip_size_kb": 0, 00:30:52.686 "state": "online", 00:30:52.686 "raid_level": "raid1", 00:30:52.686 "superblock": false, 00:30:52.686 "num_base_bdevs": 2, 00:30:52.686 "num_base_bdevs_discovered": 2, 00:30:52.686 "num_base_bdevs_operational": 2, 00:30:52.686 "process": { 00:30:52.686 "type": "rebuild", 00:30:52.686 "target": "spare", 00:30:52.686 "progress": { 00:30:52.686 "blocks": 24576, 00:30:52.686 "percent": 37 00:30:52.686 } 00:30:52.686 }, 00:30:52.686 "base_bdevs_list": [ 00:30:52.686 { 00:30:52.686 "name": "spare", 00:30:52.686 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:30:52.686 "is_configured": true, 00:30:52.686 "data_offset": 0, 00:30:52.686 "data_size": 65536 00:30:52.686 }, 00:30:52.686 { 00:30:52.686 "name": "BaseBdev2", 00:30:52.686 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:52.686 "is_configured": true, 00:30:52.686 "data_offset": 0, 00:30:52.686 "data_size": 65536 00:30:52.686 } 00:30:52.686 ] 00:30:52.686 }' 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:30:52.686 02:00:52 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:52.947 [2024-04-24 02:00:53.003851] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:52.947 [2024-04-24 02:00:53.021550] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:52.947 [2024-04-24 02:00:53.021814] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.210 02:00:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.468 02:00:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:53.468 "name": "raid_bdev1", 00:30:53.469 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:53.469 "strip_size_kb": 0, 00:30:53.469 "state": "online", 00:30:53.469 "raid_level": "raid1", 00:30:53.469 "superblock": false, 00:30:53.469 "num_base_bdevs": 2, 00:30:53.469 "num_base_bdevs_discovered": 1, 00:30:53.469 "num_base_bdevs_operational": 1, 00:30:53.469 "base_bdevs_list": [ 00:30:53.469 { 00:30:53.469 "name": null, 00:30:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.469 "is_configured": false, 00:30:53.469 "data_offset": 0, 00:30:53.469 "data_size": 65536 00:30:53.469 }, 00:30:53.469 { 00:30:53.469 "name": "BaseBdev2", 00:30:53.469 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:53.469 "is_configured": true, 00:30:53.469 "data_offset": 0, 00:30:53.469 "data_size": 65536 00:30:53.469 } 00:30:53.469 ] 00:30:53.469 }' 00:30:53.469 02:00:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:53.469 02:00:53 -- common/autotest_common.sh@10 -- # set +x 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.033 02:00:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.291 02:00:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:54.291 "name": 
"raid_bdev1", 00:30:54.291 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:54.291 "strip_size_kb": 0, 00:30:54.291 "state": "online", 00:30:54.291 "raid_level": "raid1", 00:30:54.291 "superblock": false, 00:30:54.291 "num_base_bdevs": 2, 00:30:54.291 "num_base_bdevs_discovered": 1, 00:30:54.291 "num_base_bdevs_operational": 1, 00:30:54.291 "base_bdevs_list": [ 00:30:54.291 { 00:30:54.291 "name": null, 00:30:54.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.291 "is_configured": false, 00:30:54.291 "data_offset": 0, 00:30:54.291 "data_size": 65536 00:30:54.291 }, 00:30:54.291 { 00:30:54.291 "name": "BaseBdev2", 00:30:54.291 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:54.291 "is_configured": true, 00:30:54.291 "data_offset": 0, 00:30:54.291 "data_size": 65536 00:30:54.291 } 00:30:54.291 ] 00:30:54.291 }' 00:30:54.291 02:00:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:54.291 02:00:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:54.291 02:00:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:54.548 02:00:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:30:54.548 02:00:54 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:54.805 [2024-04-24 02:00:54.656825] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:30:54.805 [2024-04-24 02:00:54.657172] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:54.805 [2024-04-24 02:00:54.674151] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:30:54.805 [2024-04-24 02:00:54.676959] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:54.806 02:00:54 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.740 02:00:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.998 02:00:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:55.998 "name": "raid_bdev1", 00:30:55.998 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:55.998 "strip_size_kb": 0, 00:30:55.998 "state": "online", 00:30:55.998 "raid_level": "raid1", 00:30:55.998 "superblock": false, 00:30:55.998 "num_base_bdevs": 2, 00:30:55.998 "num_base_bdevs_discovered": 2, 00:30:55.998 "num_base_bdevs_operational": 2, 00:30:55.998 "process": { 00:30:55.998 "type": "rebuild", 00:30:55.998 "target": "spare", 00:30:55.998 "progress": { 00:30:55.998 "blocks": 26624, 00:30:55.998 "percent": 40 00:30:55.998 } 00:30:55.998 }, 00:30:55.998 "base_bdevs_list": [ 00:30:55.998 { 00:30:55.998 "name": "spare", 00:30:55.998 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:30:55.998 "is_configured": true, 00:30:55.998 "data_offset": 0, 00:30:55.998 "data_size": 65536 00:30:55.998 }, 00:30:55.998 { 00:30:55.998 "name": "BaseBdev2", 00:30:55.998 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 
00:30:55.998 "is_configured": true, 00:30:55.998 "data_offset": 0, 00:30:55.998 "data_size": 65536 00:30:55.998 } 00:30:55.998 ] 00:30:55.998 }' 00:30:55.998 02:00:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@657 -- # local timeout=431 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.287 02:00:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.568 02:00:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:56.568 "name": "raid_bdev1", 00:30:56.568 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:56.568 "strip_size_kb": 0, 00:30:56.568 "state": "online", 00:30:56.568 "raid_level": "raid1", 00:30:56.568 "superblock": false, 00:30:56.568 "num_base_bdevs": 2, 00:30:56.568 "num_base_bdevs_discovered": 2, 00:30:56.568 "num_base_bdevs_operational": 2, 00:30:56.568 "process": { 00:30:56.568 "type": "rebuild", 00:30:56.568 "target": "spare", 00:30:56.568 "progress": { 00:30:56.568 "blocks": 34816, 00:30:56.568 "percent": 53 00:30:56.568 } 00:30:56.568 }, 00:30:56.568 "base_bdevs_list": [ 00:30:56.568 { 00:30:56.568 "name": "spare", 00:30:56.568 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:30:56.568 "is_configured": true, 00:30:56.568 "data_offset": 0, 00:30:56.568 "data_size": 65536 00:30:56.568 }, 00:30:56.568 { 00:30:56.568 "name": "BaseBdev2", 00:30:56.568 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:56.568 "is_configured": true, 00:30:56.568 "data_offset": 0, 00:30:56.568 "data_size": 65536 00:30:56.568 } 00:30:56.568 ] 00:30:56.568 }' 00:30:56.568 02:00:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:56.568 02:00:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:56.568 02:00:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:56.568 02:00:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:30:56.568 02:00:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:30:57.504 02:00:57 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.504 02:00:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.071 02:00:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:58.071 "name": "raid_bdev1", 00:30:58.071 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:58.071 "strip_size_kb": 0, 00:30:58.071 "state": "online", 00:30:58.071 "raid_level": "raid1", 00:30:58.071 "superblock": false, 00:30:58.071 "num_base_bdevs": 2, 00:30:58.071 "num_base_bdevs_discovered": 2, 00:30:58.071 "num_base_bdevs_operational": 2, 00:30:58.071 "process": { 00:30:58.071 "type": "rebuild", 00:30:58.071 "target": "spare", 00:30:58.071 "progress": { 00:30:58.071 "blocks": 63488, 00:30:58.071 "percent": 96 00:30:58.071 } 00:30:58.071 }, 00:30:58.071 "base_bdevs_list": [ 00:30:58.071 { 00:30:58.071 "name": "spare", 00:30:58.071 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:30:58.071 "is_configured": true, 00:30:58.071 "data_offset": 0, 00:30:58.071 "data_size": 65536 00:30:58.071 }, 00:30:58.071 { 00:30:58.071 "name": "BaseBdev2", 00:30:58.071 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:58.071 "is_configured": true, 00:30:58.071 "data_offset": 0, 00:30:58.071 "data_size": 65536 00:30:58.071 } 00:30:58.071 ] 00:30:58.071 }' 00:30:58.071 02:00:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:58.071 02:00:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:58.071 02:00:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:58.072 [2024-04-24 02:00:57.898583] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:58.072 [2024-04-24 02:00:57.898842] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:58.072 [2024-04-24 02:00:57.899070] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.072 02:00:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:30:58.072 02:00:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.007 02:00:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:59.272 "name": "raid_bdev1", 00:30:59.272 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:59.272 "strip_size_kb": 0, 00:30:59.272 "state": "online", 00:30:59.272 "raid_level": "raid1", 00:30:59.272 "superblock": false, 00:30:59.272 "num_base_bdevs": 2, 00:30:59.272 "num_base_bdevs_discovered": 2, 00:30:59.272 "num_base_bdevs_operational": 2, 00:30:59.272 "base_bdevs_list": [ 00:30:59.272 { 00:30:59.272 "name": "spare", 00:30:59.272 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:30:59.272 "is_configured": true, 00:30:59.272 
"data_offset": 0, 00:30:59.272 "data_size": 65536 00:30:59.272 }, 00:30:59.272 { 00:30:59.272 "name": "BaseBdev2", 00:30:59.272 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:59.272 "is_configured": true, 00:30:59.272 "data_offset": 0, 00:30:59.272 "data_size": 65536 00:30:59.272 } 00:30:59.272 ] 00:30:59.272 }' 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@660 -- # break 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.272 02:00:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.543 02:00:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:30:59.543 "name": "raid_bdev1", 00:30:59.543 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:30:59.543 "strip_size_kb": 0, 00:30:59.543 "state": "online", 00:30:59.543 "raid_level": "raid1", 00:30:59.543 "superblock": false, 00:30:59.543 "num_base_bdevs": 2, 00:30:59.543 "num_base_bdevs_discovered": 2, 00:30:59.543 "num_base_bdevs_operational": 2, 00:30:59.543 "base_bdevs_list": [ 00:30:59.543 { 00:30:59.543 "name": "spare", 00:30:59.543 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:30:59.543 "is_configured": true, 00:30:59.543 "data_offset": 0, 00:30:59.543 "data_size": 65536 00:30:59.543 }, 00:30:59.543 { 00:30:59.543 "name": "BaseBdev2", 00:30:59.543 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:30:59.543 "is_configured": true, 00:30:59.543 "data_offset": 0, 00:30:59.543 "data_size": 65536 00:30:59.543 } 00:30:59.543 ] 00:30:59.543 }' 00:30:59.543 02:00:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:30:59.802 02:00:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.061 02:00:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:00.061 "name": "raid_bdev1", 00:31:00.061 "uuid": "a4a1e1b2-6a6c-4c81-9c1e-c539ad9dcc48", 00:31:00.061 "strip_size_kb": 0, 00:31:00.061 "state": "online", 00:31:00.061 "raid_level": "raid1", 00:31:00.061 "superblock": false, 00:31:00.061 "num_base_bdevs": 2, 00:31:00.061 "num_base_bdevs_discovered": 2, 00:31:00.061 "num_base_bdevs_operational": 2, 00:31:00.061 "base_bdevs_list": [ 00:31:00.061 { 00:31:00.061 "name": "spare", 00:31:00.061 "uuid": "e6b32781-5a4a-5140-ba54-523bb3f6af55", 00:31:00.061 "is_configured": true, 00:31:00.061 "data_offset": 0, 00:31:00.061 "data_size": 65536 00:31:00.061 }, 00:31:00.061 { 00:31:00.061 "name": "BaseBdev2", 00:31:00.061 "uuid": "39f39d51-85bd-4c57-97c0-2c142538fb4d", 00:31:00.061 "is_configured": true, 00:31:00.061 "data_offset": 0, 00:31:00.061 "data_size": 65536 00:31:00.061 } 00:31:00.061 ] 00:31:00.061 }' 00:31:00.061 02:00:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:00.061 02:00:59 -- common/autotest_common.sh@10 -- # set +x 00:31:00.627 02:01:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:00.885 [2024-04-24 02:01:00.814077] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:00.885 [2024-04-24 02:01:00.814326] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:00.885 [2024-04-24 02:01:00.814524] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:00.885 [2024-04-24 02:01:00.814757] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:00.885 [2024-04-24 02:01:00.814871] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:31:00.885 02:01:00 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.885 02:01:00 -- bdev/bdev_raid.sh@671 -- # jq length 00:31:01.143 02:01:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:31:01.143 02:01:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:31:01.143 02:01:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@12 -- # local i 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:01.143 02:01:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:01.401 /dev/nbd0 00:31:01.401 02:01:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:01.401 02:01:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:01.401 02:01:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:01.401 02:01:01 -- common/autotest_common.sh@855 -- # local i 00:31:01.401 02:01:01 -- 
common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:01.401 02:01:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:01.401 02:01:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:01.401 02:01:01 -- common/autotest_common.sh@859 -- # break 00:31:01.401 02:01:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:01.401 02:01:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:01.401 02:01:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:01.401 1+0 records in 00:31:01.401 1+0 records out 00:31:01.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028558 s, 14.3 MB/s 00:31:01.401 02:01:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.402 02:01:01 -- common/autotest_common.sh@872 -- # size=4096 00:31:01.402 02:01:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.402 02:01:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:01.402 02:01:01 -- common/autotest_common.sh@875 -- # return 0 00:31:01.402 02:01:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:01.402 02:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:01.402 02:01:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:01.660 /dev/nbd1 00:31:01.660 02:01:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:01.660 02:01:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:01.660 02:01:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:01.660 02:01:01 -- common/autotest_common.sh@855 -- # local i 00:31:01.660 02:01:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:01.660 02:01:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:01.660 02:01:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:01.660 02:01:01 -- common/autotest_common.sh@859 -- # break 00:31:01.660 02:01:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:01.660 02:01:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:01.660 02:01:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:01.660 1+0 records in 00:31:01.660 1+0 records out 00:31:01.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422599 s, 9.7 MB/s 00:31:01.660 02:01:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.660 02:01:01 -- common/autotest_common.sh@872 -- # size=4096 00:31:01.660 02:01:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.660 02:01:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:01.660 02:01:01 -- common/autotest_common.sh@875 -- # return 0 00:31:01.660 02:01:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:01.660 02:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:01.660 02:01:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:01.918 02:01:01 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:01.918 02:01:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:01.918 02:01:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:01.918 02:01:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:01.918 02:01:01 -- bdev/nbd_common.sh@51 
-- # local i 00:31:01.918 02:01:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:01.918 02:01:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@41 -- # break 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@45 -- # return 0 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:02.177 02:01:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@41 -- # break 00:31:02.459 02:01:02 -- bdev/nbd_common.sh@45 -- # return 0 00:31:02.459 02:01:02 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:31:02.459 02:01:02 -- bdev/bdev_raid.sh@709 -- # killprocess 131111 00:31:02.459 02:01:02 -- common/autotest_common.sh@936 -- # '[' -z 131111 ']' 00:31:02.459 02:01:02 -- common/autotest_common.sh@940 -- # kill -0 131111 00:31:02.459 02:01:02 -- common/autotest_common.sh@941 -- # uname 00:31:02.459 02:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:02.459 02:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131111 00:31:02.459 02:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:02.459 02:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:02.459 02:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131111' 00:31:02.459 killing process with pid 131111 00:31:02.459 02:01:02 -- common/autotest_common.sh@955 -- # kill 131111 00:31:02.459 Received shutdown signal, test time was about 60.000000 seconds 00:31:02.459 00:31:02.459 Latency(us) 00:31:02.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.459 =================================================================================================================== 00:31:02.459 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:02.459 [2024-04-24 02:01:02.364051] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:02.459 02:01:02 -- common/autotest_common.sh@960 -- # wait 131111 00:31:02.772 [2024-04-24 02:01:02.712114] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:31:04.147 00:31:04.147 real 0m24.259s 00:31:04.147 user 0m33.033s 00:31:04.147 sys 0m4.674s 00:31:04.147 02:01:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:04.147 02:01:04 -- common/autotest_common.sh@10 -- # set +x 00:31:04.147 ************************************ 00:31:04.147 END TEST 
raid_rebuild_test 00:31:04.147 ************************************ 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:31:04.147 02:01:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:31:04.147 02:01:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:04.147 02:01:04 -- common/autotest_common.sh@10 -- # set +x 00:31:04.147 ************************************ 00:31:04.147 START TEST raid_rebuild_test_sb 00:31:04.147 ************************************ 00:31:04.147 02:01:04 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true false 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=131682 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:04.147 02:01:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131682 /var/tmp/spdk-raid.sock 00:31:04.147 02:01:04 -- common/autotest_common.sh@817 -- # '[' -z 131682 ']' 00:31:04.147 02:01:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:04.147 02:01:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:04.147 02:01:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:04.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:04.147 02:01:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:04.147 02:01:04 -- common/autotest_common.sh@10 -- # set +x 00:31:04.405 [2024-04-24 02:01:04.296253] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:31:04.405 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:31:04.405 Zero copy mechanism will not be used. 00:31:04.405 [2024-04-24 02:01:04.296533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131682 ] 00:31:04.405 [2024-04-24 02:01:04.473763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.663 [2024-04-24 02:01:04.707856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.921 [2024-04-24 02:01:04.990739] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:05.488 02:01:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:05.488 02:01:05 -- common/autotest_common.sh@850 -- # return 0 00:31:05.488 02:01:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:31:05.488 02:01:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:31:05.488 02:01:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:05.488 BaseBdev1_malloc 00:31:05.782 02:01:05 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:05.782 [2024-04-24 02:01:05.829807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:05.782 [2024-04-24 02:01:05.829938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.782 [2024-04-24 02:01:05.829983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:31:05.782 [2024-04-24 02:01:05.830084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.782 [2024-04-24 02:01:05.832806] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.782 [2024-04-24 02:01:05.832881] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:05.782 BaseBdev1 00:31:06.040 02:01:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:31:06.040 02:01:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:31:06.040 02:01:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:06.298 BaseBdev2_malloc 00:31:06.298 02:01:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:06.556 [2024-04-24 02:01:06.417517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:06.556 [2024-04-24 02:01:06.417618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.556 [2024-04-24 02:01:06.417665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:06.556 [2024-04-24 02:01:06.417725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.556 [2024-04-24 02:01:06.420384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.556 [2024-04-24 02:01:06.420448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:06.556 BaseBdev2 00:31:06.556 02:01:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
spare_malloc 00:31:06.813 spare_malloc 00:31:06.813 02:01:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:07.071 spare_delay 00:31:07.071 02:01:07 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:07.328 [2024-04-24 02:01:07.280630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:07.328 [2024-04-24 02:01:07.280730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:07.328 [2024-04-24 02:01:07.280778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:31:07.328 [2024-04-24 02:01:07.280830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:07.328 [2024-04-24 02:01:07.283543] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:07.328 [2024-04-24 02:01:07.283615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:07.328 spare 00:31:07.328 02:01:07 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:07.585 [2024-04-24 02:01:07.588809] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:07.585 [2024-04-24 02:01:07.591115] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:07.585 [2024-04-24 02:01:07.591385] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:31:07.585 [2024-04-24 02:01:07.591408] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:07.585 [2024-04-24 02:01:07.591571] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:07.585 [2024-04-24 02:01:07.591951] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:31:07.585 [2024-04-24 02:01:07.591971] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:31:07.585 [2024-04-24 02:01:07.592145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.585 02:01:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.844 02:01:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:07.844 "name": "raid_bdev1", 00:31:07.844 
"uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:07.844 "strip_size_kb": 0, 00:31:07.844 "state": "online", 00:31:07.844 "raid_level": "raid1", 00:31:07.844 "superblock": true, 00:31:07.844 "num_base_bdevs": 2, 00:31:07.844 "num_base_bdevs_discovered": 2, 00:31:07.844 "num_base_bdevs_operational": 2, 00:31:07.844 "base_bdevs_list": [ 00:31:07.844 { 00:31:07.844 "name": "BaseBdev1", 00:31:07.844 "uuid": "acf8bb0a-33ab-524f-a7cf-d7d622aa0268", 00:31:07.844 "is_configured": true, 00:31:07.844 "data_offset": 2048, 00:31:07.844 "data_size": 63488 00:31:07.844 }, 00:31:07.844 { 00:31:07.844 "name": "BaseBdev2", 00:31:07.844 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:07.844 "is_configured": true, 00:31:07.844 "data_offset": 2048, 00:31:07.844 "data_size": 63488 00:31:07.844 } 00:31:07.844 ] 00:31:07.844 }' 00:31:07.844 02:01:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:07.844 02:01:07 -- common/autotest_common.sh@10 -- # set +x 00:31:08.410 02:01:08 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:08.410 02:01:08 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:31:08.668 [2024-04-24 02:01:08.681227] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:08.668 02:01:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:31:08.668 02:01:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:08.668 02:01:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.925 02:01:08 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:31:08.925 02:01:08 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:31:08.925 02:01:08 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:31:08.925 02:01:08 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@12 -- # local i 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:08.925 02:01:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:09.184 [2024-04-24 02:01:09.205355] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:31:09.184 /dev/nbd0 00:31:09.184 02:01:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:09.184 02:01:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:09.184 02:01:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:09.184 02:01:09 -- common/autotest_common.sh@855 -- # local i 00:31:09.184 02:01:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:09.184 02:01:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:09.184 02:01:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:09.184 02:01:09 -- common/autotest_common.sh@859 -- # break 00:31:09.184 02:01:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:09.184 02:01:09 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:31:09.184 02:01:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:09.184 1+0 records in 00:31:09.184 1+0 records out 00:31:09.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428363 s, 9.6 MB/s 00:31:09.184 02:01:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:09.184 02:01:09 -- common/autotest_common.sh@872 -- # size=4096 00:31:09.184 02:01:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:09.442 02:01:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:09.442 02:01:09 -- common/autotest_common.sh@875 -- # return 0 00:31:09.442 02:01:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:09.442 02:01:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:09.442 02:01:09 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:31:09.442 02:01:09 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:31:09.442 02:01:09 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:31:15.997 63488+0 records in 00:31:15.997 63488+0 records out 00:31:15.997 32505856 bytes (33 MB, 31 MiB) copied, 5.59674 s, 5.8 MB/s 00:31:15.997 02:01:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:15.997 02:01:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:15.997 02:01:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:15.997 02:01:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:15.997 02:01:14 -- bdev/nbd_common.sh@51 -- # local i 00:31:15.997 02:01:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:15.997 02:01:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:15.997 [2024-04-24 02:01:15.132706] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@41 -- # break 00:31:15.997 02:01:15 -- bdev/nbd_common.sh@45 -- # return 0 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:15.997 [2024-04-24 02:01:15.416515] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:15.997 "name": "raid_bdev1", 00:31:15.997 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:15.997 "strip_size_kb": 0, 00:31:15.997 "state": "online", 00:31:15.997 "raid_level": "raid1", 00:31:15.997 "superblock": true, 00:31:15.997 "num_base_bdevs": 2, 00:31:15.997 "num_base_bdevs_discovered": 1, 00:31:15.997 "num_base_bdevs_operational": 1, 00:31:15.997 "base_bdevs_list": [ 00:31:15.997 { 00:31:15.997 "name": null, 00:31:15.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.997 "is_configured": false, 00:31:15.997 "data_offset": 2048, 00:31:15.997 "data_size": 63488 00:31:15.997 }, 00:31:15.997 { 00:31:15.997 "name": "BaseBdev2", 00:31:15.997 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:15.997 "is_configured": true, 00:31:15.997 "data_offset": 2048, 00:31:15.997 "data_size": 63488 00:31:15.997 } 00:31:15.997 ] 00:31:15.997 }' 00:31:15.997 02:01:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:15.997 02:01:15 -- common/autotest_common.sh@10 -- # set +x 00:31:16.255 02:01:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:16.512 [2024-04-24 02:01:16.576775] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:31:16.512 [2024-04-24 02:01:16.577073] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:16.768 [2024-04-24 02:01:16.598379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:31:16.768 [2024-04-24 02:01:16.600972] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:16.768 02:01:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.702 02:01:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.959 02:01:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:17.959 "name": "raid_bdev1", 00:31:17.959 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:17.959 "strip_size_kb": 0, 00:31:17.959 "state": "online", 00:31:17.959 "raid_level": "raid1", 00:31:17.959 "superblock": true, 00:31:17.959 "num_base_bdevs": 2, 00:31:17.959 "num_base_bdevs_discovered": 2, 00:31:17.959 "num_base_bdevs_operational": 2, 00:31:17.959 "process": { 00:31:17.959 "type": "rebuild", 00:31:17.959 "target": "spare", 00:31:17.959 "progress": { 00:31:17.959 "blocks": 26624, 00:31:17.959 "percent": 41 00:31:17.959 } 00:31:17.959 }, 00:31:17.959 "base_bdevs_list": [ 00:31:17.959 { 00:31:17.959 "name": "spare", 00:31:17.959 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:17.959 
"is_configured": true, 00:31:17.959 "data_offset": 2048, 00:31:17.959 "data_size": 63488 00:31:17.959 }, 00:31:17.959 { 00:31:17.959 "name": "BaseBdev2", 00:31:17.959 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:17.959 "is_configured": true, 00:31:17.959 "data_offset": 2048, 00:31:17.959 "data_size": 63488 00:31:17.959 } 00:31:17.959 ] 00:31:17.959 }' 00:31:17.959 02:01:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:17.959 02:01:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:17.959 02:01:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:17.959 02:01:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:17.959 02:01:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:18.523 [2024-04-24 02:01:18.312160] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:18.523 [2024-04-24 02:01:18.312677] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:18.523 [2024-04-24 02:01:18.312877] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:18.523 02:01:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:18.523 02:01:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:18.523 02:01:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.524 02:01:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.781 02:01:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:18.781 "name": "raid_bdev1", 00:31:18.781 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:18.781 "strip_size_kb": 0, 00:31:18.781 "state": "online", 00:31:18.781 "raid_level": "raid1", 00:31:18.781 "superblock": true, 00:31:18.781 "num_base_bdevs": 2, 00:31:18.781 "num_base_bdevs_discovered": 1, 00:31:18.781 "num_base_bdevs_operational": 1, 00:31:18.781 "base_bdevs_list": [ 00:31:18.781 { 00:31:18.781 "name": null, 00:31:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.781 "is_configured": false, 00:31:18.781 "data_offset": 2048, 00:31:18.781 "data_size": 63488 00:31:18.781 }, 00:31:18.781 { 00:31:18.781 "name": "BaseBdev2", 00:31:18.781 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:18.781 "is_configured": true, 00:31:18.781 "data_offset": 2048, 00:31:18.781 "data_size": 63488 00:31:18.781 } 00:31:18.781 ] 00:31:18.781 }' 00:31:18.781 02:01:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:18.781 02:01:18 -- common/autotest_common.sh@10 -- # set +x 00:31:19.347 02:01:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:19.347 02:01:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:19.347 02:01:19 -- 
bdev/bdev_raid.sh@184 -- # local process_type=none 00:31:19.347 02:01:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:31:19.347 02:01:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:19.347 02:01:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.347 02:01:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.652 02:01:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:19.652 "name": "raid_bdev1", 00:31:19.652 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:19.652 "strip_size_kb": 0, 00:31:19.652 "state": "online", 00:31:19.652 "raid_level": "raid1", 00:31:19.652 "superblock": true, 00:31:19.652 "num_base_bdevs": 2, 00:31:19.652 "num_base_bdevs_discovered": 1, 00:31:19.652 "num_base_bdevs_operational": 1, 00:31:19.652 "base_bdevs_list": [ 00:31:19.652 { 00:31:19.652 "name": null, 00:31:19.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.652 "is_configured": false, 00:31:19.652 "data_offset": 2048, 00:31:19.652 "data_size": 63488 00:31:19.652 }, 00:31:19.652 { 00:31:19.652 "name": "BaseBdev2", 00:31:19.652 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:19.652 "is_configured": true, 00:31:19.652 "data_offset": 2048, 00:31:19.652 "data_size": 63488 00:31:19.652 } 00:31:19.652 ] 00:31:19.652 }' 00:31:19.652 02:01:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:19.652 02:01:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:19.652 02:01:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:19.652 02:01:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:31:19.652 02:01:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:20.230 [2024-04-24 02:01:20.029739] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:31:20.230 [2024-04-24 02:01:20.030041] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:20.230 [2024-04-24 02:01:20.049171] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:31:20.230 [2024-04-24 02:01:20.051753] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:20.230 02:01:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.161 02:01:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:21.420 "name": "raid_bdev1", 00:31:21.420 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:21.420 "strip_size_kb": 0, 00:31:21.420 "state": "online", 00:31:21.420 "raid_level": "raid1", 00:31:21.420 "superblock": true, 00:31:21.420 "num_base_bdevs": 2, 00:31:21.420 "num_base_bdevs_discovered": 2, 00:31:21.420 "num_base_bdevs_operational": 2, 00:31:21.420 "process": { 00:31:21.420 "type": 
"rebuild", 00:31:21.420 "target": "spare", 00:31:21.420 "progress": { 00:31:21.420 "blocks": 24576, 00:31:21.420 "percent": 38 00:31:21.420 } 00:31:21.420 }, 00:31:21.420 "base_bdevs_list": [ 00:31:21.420 { 00:31:21.420 "name": "spare", 00:31:21.420 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:21.420 "is_configured": true, 00:31:21.420 "data_offset": 2048, 00:31:21.420 "data_size": 63488 00:31:21.420 }, 00:31:21.420 { 00:31:21.420 "name": "BaseBdev2", 00:31:21.420 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:21.420 "is_configured": true, 00:31:21.420 "data_offset": 2048, 00:31:21.420 "data_size": 63488 00:31:21.420 } 00:31:21.420 ] 00:31:21.420 }' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:31:21.420 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@657 -- # local timeout=456 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.420 02:01:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.987 02:01:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:21.987 "name": "raid_bdev1", 00:31:21.988 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:21.988 "strip_size_kb": 0, 00:31:21.988 "state": "online", 00:31:21.988 "raid_level": "raid1", 00:31:21.988 "superblock": true, 00:31:21.988 "num_base_bdevs": 2, 00:31:21.988 "num_base_bdevs_discovered": 2, 00:31:21.988 "num_base_bdevs_operational": 2, 00:31:21.988 "process": { 00:31:21.988 "type": "rebuild", 00:31:21.988 "target": "spare", 00:31:21.988 "progress": { 00:31:21.988 "blocks": 34816, 00:31:21.988 "percent": 54 00:31:21.988 } 00:31:21.988 }, 00:31:21.988 "base_bdevs_list": [ 00:31:21.988 { 00:31:21.988 "name": "spare", 00:31:21.988 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:21.988 "is_configured": true, 00:31:21.988 "data_offset": 2048, 00:31:21.988 "data_size": 63488 00:31:21.988 }, 00:31:21.988 { 00:31:21.988 "name": "BaseBdev2", 00:31:21.988 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:21.988 "is_configured": true, 00:31:21.988 "data_offset": 2048, 00:31:21.988 "data_size": 63488 00:31:21.988 } 00:31:21.988 ] 00:31:21.988 }' 00:31:21.988 02:01:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:21.988 02:01:21 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:21.988 02:01:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:21.988 02:01:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:21.988 02:01:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:22.919 02:01:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.920 02:01:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.177 [2024-04-24 02:01:23.175166] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:23.177 [2024-04-24 02:01:23.175523] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:23.177 [2024-04-24 02:01:23.175816] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.177 02:01:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:23.177 "name": "raid_bdev1", 00:31:23.177 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:23.177 "strip_size_kb": 0, 00:31:23.177 "state": "online", 00:31:23.177 "raid_level": "raid1", 00:31:23.177 "superblock": true, 00:31:23.177 "num_base_bdevs": 2, 00:31:23.177 "num_base_bdevs_discovered": 2, 00:31:23.177 "num_base_bdevs_operational": 2, 00:31:23.177 "process": { 00:31:23.177 "type": "rebuild", 00:31:23.177 "target": "spare", 00:31:23.177 "progress": { 00:31:23.177 "blocks": 61440, 00:31:23.177 "percent": 96 00:31:23.177 } 00:31:23.177 }, 00:31:23.177 "base_bdevs_list": [ 00:31:23.177 { 00:31:23.177 "name": "spare", 00:31:23.177 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:23.177 "is_configured": true, 00:31:23.177 "data_offset": 2048, 00:31:23.177 "data_size": 63488 00:31:23.177 }, 00:31:23.177 { 00:31:23.177 "name": "BaseBdev2", 00:31:23.177 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:23.177 "is_configured": true, 00:31:23.177 "data_offset": 2048, 00:31:23.177 "data_size": 63488 00:31:23.178 } 00:31:23.178 ] 00:31:23.178 }' 00:31:23.178 02:01:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:23.178 02:01:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:23.178 02:01:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:23.435 02:01:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:23.435 02:01:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:24.388 02:01:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:24.646 "name": "raid_bdev1", 00:31:24.646 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:24.646 "strip_size_kb": 0, 00:31:24.646 "state": "online", 00:31:24.646 "raid_level": "raid1", 00:31:24.646 "superblock": true, 00:31:24.646 "num_base_bdevs": 2, 00:31:24.646 "num_base_bdevs_discovered": 2, 00:31:24.646 "num_base_bdevs_operational": 2, 00:31:24.646 "base_bdevs_list": [ 00:31:24.646 { 00:31:24.646 "name": "spare", 00:31:24.646 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:24.646 "is_configured": true, 00:31:24.646 "data_offset": 2048, 00:31:24.646 "data_size": 63488 00:31:24.646 }, 00:31:24.646 { 00:31:24.646 "name": "BaseBdev2", 00:31:24.646 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:24.646 "is_configured": true, 00:31:24.646 "data_offset": 2048, 00:31:24.646 "data_size": 63488 00:31:24.646 } 00:31:24.646 ] 00:31:24.646 }' 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@660 -- # break 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.646 02:01:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.946 02:01:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:24.946 "name": "raid_bdev1", 00:31:24.946 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:24.946 "strip_size_kb": 0, 00:31:24.946 "state": "online", 00:31:24.946 "raid_level": "raid1", 00:31:24.946 "superblock": true, 00:31:24.946 "num_base_bdevs": 2, 00:31:24.946 "num_base_bdevs_discovered": 2, 00:31:24.946 "num_base_bdevs_operational": 2, 00:31:24.946 "base_bdevs_list": [ 00:31:24.946 { 00:31:24.946 "name": "spare", 00:31:24.946 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:24.946 "is_configured": true, 00:31:24.946 "data_offset": 2048, 00:31:24.946 "data_size": 63488 00:31:24.946 }, 00:31:24.946 { 00:31:24.946 "name": "BaseBdev2", 00:31:24.946 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:24.946 "is_configured": true, 00:31:24.946 "data_offset": 2048, 00:31:24.946 "data_size": 63488 00:31:24.946 } 00:31:24.946 ] 00:31:24.946 }' 00:31:24.946 02:01:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:25.205 
02:01:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:25.205 02:01:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:25.206 02:01:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:25.206 02:01:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:25.206 02:01:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:25.206 02:01:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:25.206 02:01:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.206 02:01:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.464 02:01:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:25.464 "name": "raid_bdev1", 00:31:25.464 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:25.464 "strip_size_kb": 0, 00:31:25.464 "state": "online", 00:31:25.464 "raid_level": "raid1", 00:31:25.464 "superblock": true, 00:31:25.464 "num_base_bdevs": 2, 00:31:25.464 "num_base_bdevs_discovered": 2, 00:31:25.464 "num_base_bdevs_operational": 2, 00:31:25.464 "base_bdevs_list": [ 00:31:25.464 { 00:31:25.464 "name": "spare", 00:31:25.464 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:25.464 "is_configured": true, 00:31:25.464 "data_offset": 2048, 00:31:25.464 "data_size": 63488 00:31:25.464 }, 00:31:25.464 { 00:31:25.464 "name": "BaseBdev2", 00:31:25.464 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:25.464 "is_configured": true, 00:31:25.464 "data_offset": 2048, 00:31:25.464 "data_size": 63488 00:31:25.464 } 00:31:25.464 ] 00:31:25.464 }' 00:31:25.464 02:01:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:25.464 02:01:25 -- common/autotest_common.sh@10 -- # set +x 00:31:26.028 02:01:26 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:26.593 [2024-04-24 02:01:26.373505] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:26.593 [2024-04-24 02:01:26.373566] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:26.593 [2024-04-24 02:01:26.373693] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:26.593 [2024-04-24 02:01:26.373784] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:26.593 [2024-04-24 02:01:26.373802] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:31:26.594 02:01:26 -- bdev/bdev_raid.sh@671 -- # jq length 00:31:26.594 02:01:26 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.851 02:01:26 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:31:26.851 02:01:26 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:31:26.851 02:01:26 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:31:26.851 02:01:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@12 -- # local i 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:26.851 02:01:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:27.109 /dev/nbd0 00:31:27.109 02:01:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:27.109 02:01:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:27.109 02:01:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:27.109 02:01:27 -- common/autotest_common.sh@855 -- # local i 00:31:27.109 02:01:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:27.109 02:01:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:27.109 02:01:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:27.109 02:01:27 -- common/autotest_common.sh@859 -- # break 00:31:27.109 02:01:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:27.109 02:01:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:27.109 02:01:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:27.109 1+0 records in 00:31:27.109 1+0 records out 00:31:27.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360516 s, 11.4 MB/s 00:31:27.109 02:01:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:27.109 02:01:27 -- common/autotest_common.sh@872 -- # size=4096 00:31:27.109 02:01:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:27.109 02:01:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:27.109 02:01:27 -- common/autotest_common.sh@875 -- # return 0 00:31:27.109 02:01:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:27.109 02:01:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:27.109 02:01:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:27.418 /dev/nbd1 00:31:27.418 02:01:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:27.418 02:01:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:27.418 02:01:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:27.418 02:01:27 -- common/autotest_common.sh@855 -- # local i 00:31:27.418 02:01:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:27.418 02:01:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:27.418 02:01:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:27.418 02:01:27 -- common/autotest_common.sh@859 -- # break 00:31:27.418 02:01:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:27.418 02:01:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:27.418 02:01:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:27.418 1+0 records in 00:31:27.418 1+0 records out 00:31:27.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406929 s, 10.1 MB/s 00:31:27.418 02:01:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:27.418 02:01:27 -- common/autotest_common.sh@872 -- # size=4096 00:31:27.418 02:01:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:27.418 
02:01:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:27.418 02:01:27 -- common/autotest_common.sh@875 -- # return 0 00:31:27.418 02:01:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:27.418 02:01:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:27.418 02:01:27 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:27.678 02:01:27 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:27.678 02:01:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:27.678 02:01:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:27.678 02:01:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:27.678 02:01:27 -- bdev/nbd_common.sh@51 -- # local i 00:31:27.678 02:01:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.678 02:01:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:27.936 02:01:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@41 -- # break 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.936 02:01:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:28.501 02:01:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:28.501 02:01:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:28.501 02:01:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:28.501 02:01:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:28.501 02:01:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:28.501 02:01:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:28.502 02:01:28 -- bdev/nbd_common.sh@41 -- # break 00:31:28.502 02:01:28 -- bdev/nbd_common.sh@45 -- # return 0 00:31:28.502 02:01:28 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:31:28.502 02:01:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:31:28.502 02:01:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:31:28.502 02:01:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:28.760 02:01:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:29.018 [2024-04-24 02:01:28.889574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:29.018 [2024-04-24 02:01:28.889685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.018 [2024-04-24 02:01:28.889726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:31:29.018 [2024-04-24 02:01:28.889762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.018 [2024-04-24 02:01:28.892430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.018 [2024-04-24 02:01:28.892512] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:29.018 [2024-04-24 02:01:28.892643] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:29.018 [2024-04-24 02:01:28.892738] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:29.018 BaseBdev1 00:31:29.018 02:01:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:31:29.018 02:01:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:31:29.018 02:01:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:31:29.277 02:01:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:29.277 [2024-04-24 02:01:29.309652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:29.277 [2024-04-24 02:01:29.309776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.277 [2024-04-24 02:01:29.309820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:29.277 [2024-04-24 02:01:29.309853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.277 [2024-04-24 02:01:29.310373] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.277 [2024-04-24 02:01:29.310424] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:29.277 [2024-04-24 02:01:29.310554] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:31:29.277 [2024-04-24 02:01:29.310567] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:31:29.277 [2024-04-24 02:01:29.310576] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:29.277 [2024-04-24 02:01:29.310603] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:31:29.277 [2024-04-24 02:01:29.310687] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:29.277 BaseBdev2 00:31:29.277 02:01:29 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:29.536 02:01:29 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:29.794 [2024-04-24 02:01:29.801781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:29.794 [2024-04-24 02:01:29.801882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.794 [2024-04-24 02:01:29.801928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:29.794 [2024-04-24 02:01:29.801950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.794 [2024-04-24 02:01:29.802515] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.794 [2024-04-24 02:01:29.802577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:29.794 [2024-04-24 02:01:29.802734] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:31:29.794 [2024-04-24 02:01:29.802771] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:29.794 spare 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.794 02:01:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.052 [2024-04-24 02:01:29.902901] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:31:30.052 [2024-04-24 02:01:29.902972] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:30.052 [2024-04-24 02:01:29.903237] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:31:30.052 [2024-04-24 02:01:29.903860] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:31:30.052 [2024-04-24 02:01:29.903929] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:31:30.052 [2024-04-24 02:01:29.904154] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:30.052 02:01:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:30.052 "name": "raid_bdev1", 00:31:30.052 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:30.052 "strip_size_kb": 0, 00:31:30.052 "state": "online", 00:31:30.052 "raid_level": "raid1", 00:31:30.052 "superblock": true, 00:31:30.052 "num_base_bdevs": 2, 00:31:30.052 "num_base_bdevs_discovered": 2, 00:31:30.052 "num_base_bdevs_operational": 2, 00:31:30.052 "base_bdevs_list": [ 00:31:30.052 { 00:31:30.052 "name": "spare", 00:31:30.052 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:30.052 "is_configured": true, 00:31:30.052 "data_offset": 2048, 00:31:30.052 "data_size": 63488 00:31:30.052 }, 00:31:30.052 { 00:31:30.052 "name": "BaseBdev2", 00:31:30.052 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:30.052 "is_configured": true, 00:31:30.052 "data_offset": 2048, 00:31:30.052 "data_size": 63488 00:31:30.052 } 00:31:30.052 ] 00:31:30.052 }' 00:31:30.052 02:01:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:30.052 02:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.690 02:01:30 
-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:30.690 "name": "raid_bdev1", 00:31:30.690 "uuid": "17244336-0877-48de-a006-d23fdfddbae9", 00:31:30.690 "strip_size_kb": 0, 00:31:30.690 "state": "online", 00:31:30.690 "raid_level": "raid1", 00:31:30.690 "superblock": true, 00:31:30.690 "num_base_bdevs": 2, 00:31:30.690 "num_base_bdevs_discovered": 2, 00:31:30.690 "num_base_bdevs_operational": 2, 00:31:30.690 "base_bdevs_list": [ 00:31:30.690 { 00:31:30.690 "name": "spare", 00:31:30.690 "uuid": "0c8caefe-7ebb-53b8-b239-5131b0795d93", 00:31:30.690 "is_configured": true, 00:31:30.690 "data_offset": 2048, 00:31:30.690 "data_size": 63488 00:31:30.690 }, 00:31:30.690 { 00:31:30.690 "name": "BaseBdev2", 00:31:30.690 "uuid": "64d34fe2-5fb7-522b-ad7b-186c6df7c234", 00:31:30.690 "is_configured": true, 00:31:30.690 "data_offset": 2048, 00:31:30.690 "data_size": 63488 00:31:30.690 } 00:31:30.690 ] 00:31:30.690 }' 00:31:30.690 02:01:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:30.949 02:01:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:30.949 02:01:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:30.949 02:01:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:31:30.949 02:01:30 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:30.949 02:01:30 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.207 02:01:31 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:31:31.207 02:01:31 -- bdev/bdev_raid.sh@709 -- # killprocess 131682 00:31:31.207 02:01:31 -- common/autotest_common.sh@936 -- # '[' -z 131682 ']' 00:31:31.207 02:01:31 -- common/autotest_common.sh@940 -- # kill -0 131682 00:31:31.207 02:01:31 -- common/autotest_common.sh@941 -- # uname 00:31:31.207 02:01:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:31.207 02:01:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131682 00:31:31.207 02:01:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:31.207 02:01:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:31.207 02:01:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131682' 00:31:31.207 killing process with pid 131682 00:31:31.207 02:01:31 -- common/autotest_common.sh@955 -- # kill 131682 00:31:31.207 Received shutdown signal, test time was about 60.000000 seconds 00:31:31.207 00:31:31.207 Latency(us) 00:31:31.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.208 =================================================================================================================== 00:31:31.208 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:31.208 [2024-04-24 02:01:31.120297] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:31.208 [2024-04-24 02:01:31.120390] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:31.208 [2024-04-24 02:01:31.120457] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:31.208 [2024-04-24 02:01:31.120467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:31:31.208 02:01:31 -- common/autotest_common.sh@960 -- # wait 131682 00:31:31.465 [2024-04-24 02:01:31.460428] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:32.835 02:01:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:31:32.835 00:31:32.835 real 0m28.686s 00:31:32.835 user 0m40.586s 00:31:32.835 sys 0m5.621s 00:31:32.835 02:01:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:32.835 ************************************ 00:31:32.835 END TEST raid_rebuild_test_sb 00:31:32.835 ************************************ 00:31:32.835 02:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:31:33.093 02:01:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:31:33.093 02:01:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:33.093 02:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:33.093 ************************************ 00:31:33.093 START TEST raid_rebuild_test_io 00:31:33.093 ************************************ 00:31:33.093 02:01:32 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false true 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:31:33.093 02:01:32 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:31:33.093 02:01:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=132346 00:31:33.093 02:01:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132346 /var/tmp/spdk-raid.sock 00:31:33.093 02:01:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:33.093 02:01:33 -- common/autotest_common.sh@817 -- # '[' -z 132346 ']' 00:31:33.093 02:01:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:33.093 02:01:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:33.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:31:33.093 02:01:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:33.093 02:01:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:33.093 02:01:33 -- common/autotest_common.sh@10 -- # set +x 00:31:33.093 [2024-04-24 02:01:33.067023] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:31:33.093 [2024-04-24 02:01:33.067182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132346 ] 00:31:33.093 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:33.093 Zero copy mechanism will not be used. 00:31:33.351 [2024-04-24 02:01:33.233831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.608 [2024-04-24 02:01:33.473717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.870 [2024-04-24 02:01:33.728134] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:34.128 02:01:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:34.128 02:01:33 -- common/autotest_common.sh@850 -- # return 0 00:31:34.128 02:01:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:31:34.128 02:01:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:31:34.128 02:01:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:34.128 BaseBdev1 00:31:34.128 02:01:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:31:34.128 02:01:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:31:34.128 02:01:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:34.618 BaseBdev2 00:31:34.618 02:01:34 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:34.876 spare_malloc 00:31:34.876 02:01:34 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:35.134 spare_delay 00:31:35.134 02:01:35 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:35.392 [2024-04-24 02:01:35.313165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:35.392 [2024-04-24 02:01:35.313268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.392 [2024-04-24 02:01:35.313304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:31:35.392 [2024-04-24 02:01:35.313344] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.392 [2024-04-24 02:01:35.315800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.392 [2024-04-24 02:01:35.315860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:35.392 spare 00:31:35.392 02:01:35 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:35.650 [2024-04-24 02:01:35.569268] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:35.650 [2024-04-24 02:01:35.571467] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:35.650 [2024-04-24 02:01:35.571556] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:31:35.650 [2024-04-24 02:01:35.571566] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:35.650 [2024-04-24 02:01:35.571729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:31:35.650 [2024-04-24 02:01:35.572081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:31:35.650 [2024-04-24 02:01:35.572102] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:31:35.650 [2024-04-24 02:01:35.572292] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.650 02:01:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.909 02:01:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:35.909 "name": "raid_bdev1", 00:31:35.909 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:35.909 "strip_size_kb": 0, 00:31:35.909 "state": "online", 00:31:35.909 "raid_level": "raid1", 00:31:35.909 "superblock": false, 00:31:35.909 "num_base_bdevs": 2, 00:31:35.909 "num_base_bdevs_discovered": 2, 00:31:35.909 "num_base_bdevs_operational": 2, 00:31:35.909 "base_bdevs_list": [ 00:31:35.909 { 00:31:35.909 "name": "BaseBdev1", 00:31:35.909 "uuid": "032f5aeb-8799-4d27-8438-1fabb15b5050", 00:31:35.909 "is_configured": true, 00:31:35.909 "data_offset": 0, 00:31:35.909 "data_size": 65536 00:31:35.909 }, 00:31:35.909 { 00:31:35.909 "name": "BaseBdev2", 00:31:35.909 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:35.909 "is_configured": true, 00:31:35.909 "data_offset": 0, 00:31:35.909 "data_size": 65536 00:31:35.909 } 00:31:35.909 ] 00:31:35.909 }' 00:31:35.909 02:01:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:35.909 02:01:35 -- common/autotest_common.sh@10 -- # set +x 00:31:36.476 02:01:36 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:36.476 02:01:36 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:31:36.733 [2024-04-24 02:01:36.565629] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:36.733 02:01:36 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:31:36.733 02:01:36 -- 
bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.734 02:01:36 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:36.734 02:01:36 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:31:36.734 02:01:36 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:31:36.734 02:01:36 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:36.734 02:01:36 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:36.992 [2024-04-24 02:01:36.895681] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:31:36.992 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:36.992 Zero copy mechanism will not be used. 00:31:36.992 Running I/O for 60 seconds... 00:31:36.992 [2024-04-24 02:01:37.033114] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:36.992 [2024-04-24 02:01:37.039479] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:36.992 02:01:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:37.250 02:01:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.250 02:01:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.508 02:01:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:37.508 "name": "raid_bdev1", 00:31:37.508 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:37.508 "strip_size_kb": 0, 00:31:37.508 "state": "online", 00:31:37.508 "raid_level": "raid1", 00:31:37.508 "superblock": false, 00:31:37.508 "num_base_bdevs": 2, 00:31:37.508 "num_base_bdevs_discovered": 1, 00:31:37.508 "num_base_bdevs_operational": 1, 00:31:37.508 "base_bdevs_list": [ 00:31:37.508 { 00:31:37.508 "name": null, 00:31:37.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.508 "is_configured": false, 00:31:37.508 "data_offset": 0, 00:31:37.508 "data_size": 65536 00:31:37.508 }, 00:31:37.508 { 00:31:37.508 "name": "BaseBdev2", 00:31:37.508 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:37.508 "is_configured": true, 00:31:37.508 "data_offset": 0, 00:31:37.508 "data_size": 65536 00:31:37.508 } 00:31:37.508 ] 00:31:37.508 }' 00:31:37.508 02:01:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:37.508 02:01:37 -- common/autotest_common.sh@10 -- # set +x 00:31:38.074 02:01:38 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:38.385 [2024-04-24 02:01:38.295478] 
bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:31:38.385 [2024-04-24 02:01:38.295537] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:38.385 02:01:38 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:31:38.385 [2024-04-24 02:01:38.366196] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:31:38.385 [2024-04-24 02:01:38.368550] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:38.642 [2024-04-24 02:01:38.481404] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:38.642 [2024-04-24 02:01:38.481917] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:38.642 [2024-04-24 02:01:38.605407] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:38.642 [2024-04-24 02:01:38.605707] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:38.900 [2024-04-24 02:01:38.864403] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:39.158 [2024-04-24 02:01:38.997719] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:39.158 [2024-04-24 02:01:38.998072] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:39.416 [2024-04-24 02:01:39.329417] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.416 02:01:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.416 [2024-04-24 02:01:39.463290] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:39.416 [2024-04-24 02:01:39.463598] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:39.674 02:01:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:39.674 "name": "raid_bdev1", 00:31:39.674 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:39.674 "strip_size_kb": 0, 00:31:39.674 "state": "online", 00:31:39.674 "raid_level": "raid1", 00:31:39.674 "superblock": false, 00:31:39.674 "num_base_bdevs": 2, 00:31:39.674 "num_base_bdevs_discovered": 2, 00:31:39.674 "num_base_bdevs_operational": 2, 00:31:39.674 "process": { 00:31:39.674 "type": "rebuild", 00:31:39.674 "target": "spare", 00:31:39.674 "progress": { 00:31:39.674 "blocks": 16384, 00:31:39.674 "percent": 25 00:31:39.674 } 00:31:39.674 }, 00:31:39.674 "base_bdevs_list": [ 00:31:39.674 { 00:31:39.674 "name": "spare", 00:31:39.674 "uuid": 
"da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:39.674 "is_configured": true, 00:31:39.674 "data_offset": 0, 00:31:39.674 "data_size": 65536 00:31:39.674 }, 00:31:39.674 { 00:31:39.674 "name": "BaseBdev2", 00:31:39.674 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:39.674 "is_configured": true, 00:31:39.674 "data_offset": 0, 00:31:39.674 "data_size": 65536 00:31:39.674 } 00:31:39.674 ] 00:31:39.674 }' 00:31:39.674 02:01:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:39.674 02:01:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.674 02:01:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:39.931 02:01:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.931 02:01:39 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:39.931 [2024-04-24 02:01:39.832013] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:39.931 [2024-04-24 02:01:39.962234] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:40.188 [2024-04-24 02:01:40.032815] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:40.188 [2024-04-24 02:01:40.183647] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:40.188 [2024-04-24 02:01:40.197970] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:40.188 [2024-04-24 02:01:40.228262] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.188 02:01:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.753 02:01:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:40.753 "name": "raid_bdev1", 00:31:40.753 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:40.753 "strip_size_kb": 0, 00:31:40.753 "state": "online", 00:31:40.753 "raid_level": "raid1", 00:31:40.753 "superblock": false, 00:31:40.753 "num_base_bdevs": 2, 00:31:40.753 "num_base_bdevs_discovered": 1, 00:31:40.753 "num_base_bdevs_operational": 1, 00:31:40.753 "base_bdevs_list": [ 00:31:40.753 { 00:31:40.753 "name": null, 00:31:40.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.753 "is_configured": false, 00:31:40.753 "data_offset": 0, 00:31:40.753 "data_size": 65536 00:31:40.753 }, 00:31:40.753 { 00:31:40.753 "name": "BaseBdev2", 00:31:40.753 "uuid": 
"a2400da4-2447-421c-b443-7af81dbae749", 00:31:40.753 "is_configured": true, 00:31:40.753 "data_offset": 0, 00:31:40.753 "data_size": 65536 00:31:40.753 } 00:31:40.753 ] 00:31:40.753 }' 00:31:40.753 02:01:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:40.753 02:01:40 -- common/autotest_common.sh@10 -- # set +x 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.379 02:01:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.636 02:01:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:41.636 "name": "raid_bdev1", 00:31:41.636 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:41.636 "strip_size_kb": 0, 00:31:41.636 "state": "online", 00:31:41.636 "raid_level": "raid1", 00:31:41.636 "superblock": false, 00:31:41.636 "num_base_bdevs": 2, 00:31:41.636 "num_base_bdevs_discovered": 1, 00:31:41.636 "num_base_bdevs_operational": 1, 00:31:41.636 "base_bdevs_list": [ 00:31:41.636 { 00:31:41.636 "name": null, 00:31:41.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.636 "is_configured": false, 00:31:41.636 "data_offset": 0, 00:31:41.636 "data_size": 65536 00:31:41.636 }, 00:31:41.636 { 00:31:41.636 "name": "BaseBdev2", 00:31:41.636 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:41.636 "is_configured": true, 00:31:41.636 "data_offset": 0, 00:31:41.636 "data_size": 65536 00:31:41.636 } 00:31:41.636 ] 00:31:41.636 }' 00:31:41.636 02:01:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:41.636 02:01:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:41.636 02:01:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:41.636 02:01:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:31:41.636 02:01:41 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:41.893 [2024-04-24 02:01:41.737404] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:31:41.893 [2024-04-24 02:01:41.737461] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:41.893 02:01:41 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:31:41.893 [2024-04-24 02:01:41.807507] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:41.893 [2024-04-24 02:01:41.809644] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:41.893 [2024-04-24 02:01:41.925440] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:41.893 [2024-04-24 02:01:41.925978] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:42.150 [2024-04-24 02:01:42.050451] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:42.150 [2024-04-24 02:01:42.050743] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:31:42.408 [2024-04-24 02:01:42.381361] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:42.666 [2024-04-24 02:01:42.506367] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:42.666 [2024-04-24 02:01:42.506663] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.925 02:01:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.925 [2024-04-24 02:01:42.864368] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:43.183 "name": "raid_bdev1", 00:31:43.183 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:43.183 "strip_size_kb": 0, 00:31:43.183 "state": "online", 00:31:43.183 "raid_level": "raid1", 00:31:43.183 "superblock": false, 00:31:43.183 "num_base_bdevs": 2, 00:31:43.183 "num_base_bdevs_discovered": 2, 00:31:43.183 "num_base_bdevs_operational": 2, 00:31:43.183 "process": { 00:31:43.183 "type": "rebuild", 00:31:43.183 "target": "spare", 00:31:43.183 "progress": { 00:31:43.183 "blocks": 14336, 00:31:43.183 "percent": 21 00:31:43.183 } 00:31:43.183 }, 00:31:43.183 "base_bdevs_list": [ 00:31:43.183 { 00:31:43.183 "name": "spare", 00:31:43.183 "uuid": "da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:43.183 "is_configured": true, 00:31:43.183 "data_offset": 0, 00:31:43.183 "data_size": 65536 00:31:43.183 }, 00:31:43.183 { 00:31:43.183 "name": "BaseBdev2", 00:31:43.183 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:43.183 "is_configured": true, 00:31:43.183 "data_offset": 0, 00:31:43.183 "data_size": 65536 00:31:43.183 } 00:31:43.183 ] 00:31:43.183 }' 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:43.183 [2024-04-24 02:01:43.082273] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@657 -- # local timeout=478 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:43.183 02:01:43 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.183 02:01:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.441 [2024-04-24 02:01:43.293247] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:43.441 02:01:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:43.441 "name": "raid_bdev1", 00:31:43.441 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:43.441 "strip_size_kb": 0, 00:31:43.441 "state": "online", 00:31:43.441 "raid_level": "raid1", 00:31:43.441 "superblock": false, 00:31:43.441 "num_base_bdevs": 2, 00:31:43.441 "num_base_bdevs_discovered": 2, 00:31:43.441 "num_base_bdevs_operational": 2, 00:31:43.441 "process": { 00:31:43.441 "type": "rebuild", 00:31:43.441 "target": "spare", 00:31:43.441 "progress": { 00:31:43.441 "blocks": 22528, 00:31:43.441 "percent": 34 00:31:43.441 } 00:31:43.441 }, 00:31:43.441 "base_bdevs_list": [ 00:31:43.441 { 00:31:43.441 "name": "spare", 00:31:43.441 "uuid": "da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:43.441 "is_configured": true, 00:31:43.441 "data_offset": 0, 00:31:43.441 "data_size": 65536 00:31:43.441 }, 00:31:43.441 { 00:31:43.441 "name": "BaseBdev2", 00:31:43.441 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:43.441 "is_configured": true, 00:31:43.441 "data_offset": 0, 00:31:43.441 "data_size": 65536 00:31:43.441 } 00:31:43.441 ] 00:31:43.441 }' 00:31:43.441 02:01:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:43.441 02:01:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.718 02:01:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:43.718 02:01:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.718 02:01:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:43.718 [2024-04-24 02:01:43.730239] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:43.718 [2024-04-24 02:01:43.730591] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:43.976 [2024-04-24 02:01:44.043422] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:43.976 [2024-04-24 02:01:44.043944] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:44.234 [2024-04-24 02:01:44.171604] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:44.800 02:01:44 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:44.800 "name": "raid_bdev1", 00:31:44.800 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:44.800 "strip_size_kb": 0, 00:31:44.800 "state": "online", 00:31:44.800 "raid_level": "raid1", 00:31:44.800 "superblock": false, 00:31:44.800 "num_base_bdevs": 2, 00:31:44.800 "num_base_bdevs_discovered": 2, 00:31:44.800 "num_base_bdevs_operational": 2, 00:31:44.800 "process": { 00:31:44.800 "type": "rebuild", 00:31:44.800 "target": "spare", 00:31:44.800 "progress": { 00:31:44.800 "blocks": 43008, 00:31:44.800 "percent": 65 00:31:44.800 } 00:31:44.800 }, 00:31:44.800 "base_bdevs_list": [ 00:31:44.800 { 00:31:44.800 "name": "spare", 00:31:44.800 "uuid": "da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:44.800 "is_configured": true, 00:31:44.800 "data_offset": 0, 00:31:44.800 "data_size": 65536 00:31:44.800 }, 00:31:44.800 { 00:31:44.800 "name": "BaseBdev2", 00:31:44.800 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:44.800 "is_configured": true, 00:31:44.800 "data_offset": 0, 00:31:44.800 "data_size": 65536 00:31:44.800 } 00:31:44.800 ] 00:31:44.800 }' 00:31:44.800 02:01:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:45.057 02:01:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:45.058 02:01:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:45.058 02:01:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.058 02:01:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:45.315 [2024-04-24 02:01:45.306879] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:31:45.573 [2024-04-24 02:01:45.533674] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.139 02:01:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.139 [2024-04-24 02:01:45.983496] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:46.139 [2024-04-24 02:01:46.083489] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:46.139 [2024-04-24 02:01:46.086544] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:46.396 02:01:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:46.396 "name": "raid_bdev1", 00:31:46.396 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:46.396 "strip_size_kb": 0, 00:31:46.397 "state": "online", 00:31:46.397 "raid_level": "raid1", 00:31:46.397 "superblock": false, 00:31:46.397 "num_base_bdevs": 2, 00:31:46.397 
"num_base_bdevs_discovered": 2, 00:31:46.397 "num_base_bdevs_operational": 2, 00:31:46.397 "base_bdevs_list": [ 00:31:46.397 { 00:31:46.397 "name": "spare", 00:31:46.397 "uuid": "da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:46.397 "is_configured": true, 00:31:46.397 "data_offset": 0, 00:31:46.397 "data_size": 65536 00:31:46.397 }, 00:31:46.397 { 00:31:46.397 "name": "BaseBdev2", 00:31:46.397 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:46.397 "is_configured": true, 00:31:46.397 "data_offset": 0, 00:31:46.397 "data_size": 65536 00:31:46.397 } 00:31:46.397 ] 00:31:46.397 }' 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@660 -- # break 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.397 02:01:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:46.972 "name": "raid_bdev1", 00:31:46.972 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:46.972 "strip_size_kb": 0, 00:31:46.972 "state": "online", 00:31:46.972 "raid_level": "raid1", 00:31:46.972 "superblock": false, 00:31:46.972 "num_base_bdevs": 2, 00:31:46.972 "num_base_bdevs_discovered": 2, 00:31:46.972 "num_base_bdevs_operational": 2, 00:31:46.972 "base_bdevs_list": [ 00:31:46.972 { 00:31:46.972 "name": "spare", 00:31:46.972 "uuid": "da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:46.972 "is_configured": true, 00:31:46.972 "data_offset": 0, 00:31:46.972 "data_size": 65536 00:31:46.972 }, 00:31:46.972 { 00:31:46.972 "name": "BaseBdev2", 00:31:46.972 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:46.972 "is_configured": true, 00:31:46.972 "data_offset": 0, 00:31:46.972 "data_size": 65536 00:31:46.972 } 00:31:46.972 ] 00:31:46.972 }' 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:46.972 02:01:46 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.972 02:01:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.230 02:01:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:47.230 "name": "raid_bdev1", 00:31:47.230 "uuid": "8b6493b7-dd0c-49ce-89f9-1593241b98e1", 00:31:47.230 "strip_size_kb": 0, 00:31:47.230 "state": "online", 00:31:47.230 "raid_level": "raid1", 00:31:47.230 "superblock": false, 00:31:47.230 "num_base_bdevs": 2, 00:31:47.230 "num_base_bdevs_discovered": 2, 00:31:47.230 "num_base_bdevs_operational": 2, 00:31:47.230 "base_bdevs_list": [ 00:31:47.230 { 00:31:47.230 "name": "spare", 00:31:47.230 "uuid": "da050b83-6f09-5b7e-b88f-856bd4690194", 00:31:47.230 "is_configured": true, 00:31:47.230 "data_offset": 0, 00:31:47.230 "data_size": 65536 00:31:47.230 }, 00:31:47.230 { 00:31:47.230 "name": "BaseBdev2", 00:31:47.230 "uuid": "a2400da4-2447-421c-b443-7af81dbae749", 00:31:47.230 "is_configured": true, 00:31:47.230 "data_offset": 0, 00:31:47.230 "data_size": 65536 00:31:47.230 } 00:31:47.230 ] 00:31:47.230 }' 00:31:47.230 02:01:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:47.230 02:01:47 -- common/autotest_common.sh@10 -- # set +x 00:31:48.164 02:01:48 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:48.423 [2024-04-24 02:01:48.354218] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:48.423 [2024-04-24 02:01:48.354304] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:48.423 00:31:48.423 Latency(us) 00:31:48.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.423 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:48.423 raid_bdev1 : 11.56 110.25 330.74 0.00 0.00 12180.37 368.64 116841.33 00:31:48.423 =================================================================================================================== 00:31:48.423 Total : 110.25 330.74 0.00 0.00 12180.37 368.64 116841.33 00:31:48.423 [2024-04-24 02:01:48.483491] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.423 [2024-04-24 02:01:48.483580] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:48.423 0 00:31:48.423 [2024-04-24 02:01:48.483692] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:48.423 [2024-04-24 02:01:48.483706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:31:48.680 02:01:48 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.680 02:01:48 -- bdev/bdev_raid.sh@671 -- # jq length 00:31:48.680 02:01:48 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:31:48.680 02:01:48 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:31:48.680 02:01:48 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:48.680 02:01:48 -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@12 -- # local i 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.680 02:01:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:48.937 /dev/nbd0 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:48.937 02:01:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:48.937 02:01:48 -- common/autotest_common.sh@855 -- # local i 00:31:48.937 02:01:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:48.937 02:01:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:48.937 02:01:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:48.937 02:01:48 -- common/autotest_common.sh@859 -- # break 00:31:48.937 02:01:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:48.937 02:01:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:48.937 02:01:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.937 1+0 records in 00:31:48.937 1+0 records out 00:31:48.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521615 s, 7.9 MB/s 00:31:48.937 02:01:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.937 02:01:48 -- common/autotest_common.sh@872 -- # size=4096 00:31:48.937 02:01:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.937 02:01:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:48.937 02:01:48 -- common/autotest_common.sh@875 -- # return 0 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.937 02:01:48 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:31:48.937 02:01:48 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:31:48.937 02:01:48 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@12 -- # local i 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.937 02:01:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:49.504 /dev/nbd1 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:49.504 02:01:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:49.504 02:01:49 -- common/autotest_common.sh@855 -- # local i 00:31:49.504 02:01:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:49.504 02:01:49 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:49.504 02:01:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:49.504 02:01:49 -- common/autotest_common.sh@859 -- # break 00:31:49.504 02:01:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:49.504 02:01:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:49.504 02:01:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.504 1+0 records in 00:31:49.504 1+0 records out 00:31:49.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430224 s, 9.5 MB/s 00:31:49.504 02:01:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.504 02:01:49 -- common/autotest_common.sh@872 -- # size=4096 00:31:49.504 02:01:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.504 02:01:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:49.504 02:01:49 -- common/autotest_common.sh@875 -- # return 0 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:49.504 02:01:49 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:49.504 02:01:49 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@51 -- # local i 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:49.504 02:01:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@41 -- # break 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@45 -- # return 0 00:31:49.762 02:01:49 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@51 -- # local i 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:49.762 02:01:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:50.023 02:01:50 -- 
bdev/nbd_common.sh@41 -- # break 00:31:50.023 02:01:50 -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.023 02:01:50 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:31:50.023 02:01:50 -- bdev/bdev_raid.sh@709 -- # killprocess 132346 00:31:50.023 02:01:50 -- common/autotest_common.sh@936 -- # '[' -z 132346 ']' 00:31:50.023 02:01:50 -- common/autotest_common.sh@940 -- # kill -0 132346 00:31:50.023 02:01:50 -- common/autotest_common.sh@941 -- # uname 00:31:50.023 02:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:50.284 02:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132346 00:31:50.284 02:01:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:50.284 02:01:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:50.284 02:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132346' 00:31:50.284 killing process with pid 132346 00:31:50.284 02:01:50 -- common/autotest_common.sh@955 -- # kill 132346 00:31:50.284 Received shutdown signal, test time was about 13.224199 seconds 00:31:50.284 00:31:50.284 Latency(us) 00:31:50.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.284 =================================================================================================================== 00:31:50.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.284 [2024-04-24 02:01:50.122277] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:50.284 02:01:50 -- common/autotest_common.sh@960 -- # wait 132346 00:31:50.542 [2024-04-24 02:01:50.385642] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@711 -- # return 0 00:31:51.917 00:31:51.917 real 0m18.854s 00:31:51.917 user 0m28.713s 00:31:51.917 sys 0m2.424s 00:31:51.917 02:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:51.917 02:01:51 -- common/autotest_common.sh@10 -- # set +x 00:31:51.917 ************************************ 00:31:51.917 END TEST raid_rebuild_test_io 00:31:51.917 ************************************ 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:31:51.917 02:01:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:31:51.917 02:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:51.917 02:01:51 -- common/autotest_common.sh@10 -- # set +x 00:31:51.917 ************************************ 00:31:51.917 START TEST raid_rebuild_test_sb_io 00:31:51.917 ************************************ 00:31:51.917 02:01:51 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true true 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # (( i <= 
num_base_bdevs )) 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@544 -- # raid_pid=132834 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132834 /var/tmp/spdk-raid.sock 00:31:51.917 02:01:51 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:51.917 02:01:51 -- common/autotest_common.sh@817 -- # '[' -z 132834 ']' 00:31:51.917 02:01:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:51.917 02:01:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:51.917 02:01:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:51.917 02:01:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:51.917 02:01:51 -- common/autotest_common.sh@10 -- # set +x 00:31:52.177 [2024-04-24 02:01:52.020935] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:31:52.177 [2024-04-24 02:01:52.021114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132834 ] 00:31:52.177 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:52.177 Zero copy mechanism will not be used. 
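A minimal sketch of the bdevperf launch-and-wait step exercised above, assuming the repo layout seen in the trace; the polling loop is a simplified stand-in for the test suite's waitforlisten helper, and the rpc_get_methods probe is an assumption rather than what that helper necessarily calls:

  # start bdevperf with its own RPC socket, mirroring the flags shown in the trace
  SPDK_DIR=/home/vagrant/spdk_repo/spdk      # assumed repo root, as in the trace
  RPC_SOCK=/var/tmp/spdk-raid.sock
  "$SPDK_DIR"/build/examples/bdevperf -r "$RPC_SOCK" -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # wait until the app answers on the socket before issuing any bdev_* RPCs
  for _ in $(seq 1 100); do
      "$SPDK_DIR"/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done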
00:31:52.177 [2024-04-24 02:01:52.197368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.455 [2024-04-24 02:01:52.411077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.718 [2024-04-24 02:01:52.650700] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:52.976 02:01:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:52.976 02:01:52 -- common/autotest_common.sh@850 -- # return 0 00:31:52.976 02:01:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:31:52.976 02:01:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:31:52.976 02:01:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:53.233 BaseBdev1_malloc 00:31:53.233 02:01:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:53.491 [2024-04-24 02:01:53.487737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:53.491 [2024-04-24 02:01:53.487852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.491 [2024-04-24 02:01:53.487893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:31:53.491 [2024-04-24 02:01:53.487946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.491 [2024-04-24 02:01:53.490647] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.491 [2024-04-24 02:01:53.490703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:53.491 BaseBdev1 00:31:53.491 02:01:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:31:53.491 02:01:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:31:53.491 02:01:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:53.749 BaseBdev2_malloc 00:31:54.006 02:01:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:54.265 [2024-04-24 02:01:54.133791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:54.265 [2024-04-24 02:01:54.133885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:54.265 [2024-04-24 02:01:54.133931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:54.265 [2024-04-24 02:01:54.134022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:54.265 [2024-04-24 02:01:54.136624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:54.265 [2024-04-24 02:01:54.136680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:54.265 BaseBdev2 00:31:54.265 02:01:54 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:54.524 spare_malloc 00:31:54.524 02:01:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:54.781 spare_delay 00:31:54.781 02:01:54 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:55.060 [2024-04-24 02:01:54.931912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:55.060 [2024-04-24 02:01:54.931999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.060 [2024-04-24 02:01:54.932042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:31:55.060 [2024-04-24 02:01:54.932089] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.060 [2024-04-24 02:01:54.934558] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.060 [2024-04-24 02:01:54.934614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:55.060 spare 00:31:55.060 02:01:54 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:55.317 [2024-04-24 02:01:55.212045] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:55.317 [2024-04-24 02:01:55.214180] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:55.317 [2024-04-24 02:01:55.214359] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:31:55.317 [2024-04-24 02:01:55.214371] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:55.317 [2024-04-24 02:01:55.214507] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:55.317 [2024-04-24 02:01:55.214838] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:31:55.317 [2024-04-24 02:01:55.214849] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:31:55.317 [2024-04-24 02:01:55.215016] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.317 02:01:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.575 02:01:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:55.575 "name": "raid_bdev1", 00:31:55.575 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:31:55.575 "strip_size_kb": 0, 00:31:55.575 "state": "online", 00:31:55.575 "raid_level": "raid1", 00:31:55.575 "superblock": true, 00:31:55.575 "num_base_bdevs": 2, 00:31:55.575 "num_base_bdevs_discovered": 2, 00:31:55.575 "num_base_bdevs_operational": 2, 00:31:55.575 
"base_bdevs_list": [ 00:31:55.575 { 00:31:55.575 "name": "BaseBdev1", 00:31:55.575 "uuid": "801ac4a5-9ff4-5695-9db3-b9ddfb31505e", 00:31:55.575 "is_configured": true, 00:31:55.575 "data_offset": 2048, 00:31:55.575 "data_size": 63488 00:31:55.575 }, 00:31:55.575 { 00:31:55.575 "name": "BaseBdev2", 00:31:55.575 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:31:55.575 "is_configured": true, 00:31:55.575 "data_offset": 2048, 00:31:55.575 "data_size": 63488 00:31:55.575 } 00:31:55.575 ] 00:31:55.575 }' 00:31:55.575 02:01:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:55.575 02:01:55 -- common/autotest_common.sh@10 -- # set +x 00:31:56.140 02:01:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:31:56.140 02:01:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:56.399 [2024-04-24 02:01:56.376492] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:56.399 02:01:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:31:56.399 02:01:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:56.399 02:01:56 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.657 02:01:56 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:31:56.657 02:01:56 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:31:56.657 02:01:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:56.657 02:01:56 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:56.916 [2024-04-24 02:01:56.805506] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:56.916 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:56.916 Zero copy mechanism will not be used. 00:31:56.916 Running I/O for 60 seconds... 
00:31:56.916 [2024-04-24 02:01:56.966171] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:56.916 [2024-04-24 02:01:56.980395] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.174 02:01:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.433 02:01:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:57.433 "name": "raid_bdev1", 00:31:57.433 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:31:57.433 "strip_size_kb": 0, 00:31:57.433 "state": "online", 00:31:57.433 "raid_level": "raid1", 00:31:57.433 "superblock": true, 00:31:57.433 "num_base_bdevs": 2, 00:31:57.433 "num_base_bdevs_discovered": 1, 00:31:57.433 "num_base_bdevs_operational": 1, 00:31:57.433 "base_bdevs_list": [ 00:31:57.433 { 00:31:57.433 "name": null, 00:31:57.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.433 "is_configured": false, 00:31:57.433 "data_offset": 2048, 00:31:57.433 "data_size": 63488 00:31:57.433 }, 00:31:57.433 { 00:31:57.433 "name": "BaseBdev2", 00:31:57.433 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:31:57.433 "is_configured": true, 00:31:57.433 "data_offset": 2048, 00:31:57.433 "data_size": 63488 00:31:57.433 } 00:31:57.433 ] 00:31:57.433 }' 00:31:57.433 02:01:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:57.433 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:31:57.999 02:01:57 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:58.257 [2024-04-24 02:01:58.200149] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:31:58.257 [2024-04-24 02:01:58.200440] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:58.257 02:01:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:31:58.257 [2024-04-24 02:01:58.255367] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:58.257 [2024-04-24 02:01:58.257657] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:58.515 [2024-04-24 02:01:58.372391] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:58.515 [2024-04-24 02:01:58.509092] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:58.515 [2024-04-24 02:01:58.509601] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:31:59.081 [2024-04-24 02:01:58.994833] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:59.081 [2024-04-24 02:01:58.995321] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.339 02:01:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.339 [2024-04-24 02:01:59.358552] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:59.597 02:01:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:31:59.597 "name": "raid_bdev1", 00:31:59.597 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:31:59.597 "strip_size_kb": 0, 00:31:59.597 "state": "online", 00:31:59.597 "raid_level": "raid1", 00:31:59.597 "superblock": true, 00:31:59.597 "num_base_bdevs": 2, 00:31:59.597 "num_base_bdevs_discovered": 2, 00:31:59.597 "num_base_bdevs_operational": 2, 00:31:59.597 "process": { 00:31:59.597 "type": "rebuild", 00:31:59.597 "target": "spare", 00:31:59.597 "progress": { 00:31:59.597 "blocks": 14336, 00:31:59.597 "percent": 22 00:31:59.597 } 00:31:59.597 }, 00:31:59.597 "base_bdevs_list": [ 00:31:59.597 { 00:31:59.597 "name": "spare", 00:31:59.597 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:31:59.597 "is_configured": true, 00:31:59.597 "data_offset": 2048, 00:31:59.597 "data_size": 63488 00:31:59.597 }, 00:31:59.597 { 00:31:59.597 "name": "BaseBdev2", 00:31:59.597 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:31:59.597 "is_configured": true, 00:31:59.597 "data_offset": 2048, 00:31:59.597 "data_size": 63488 00:31:59.597 } 00:31:59.597 ] 00:31:59.597 }' 00:31:59.597 02:01:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:31:59.597 02:01:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:59.597 02:01:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:31:59.597 [2024-04-24 02:01:59.589186] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:59.597 02:01:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:31:59.597 02:01:59 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:59.855 [2024-04-24 02:01:59.807436] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:59.855 [2024-04-24 02:01:59.880453] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:00.113 [2024-04-24 02:01:59.943176] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:32:00.113 [2024-04-24 02:02:00.051844] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:00.113 
[2024-04-24 02:02:00.068293] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.113 [2024-04-24 02:02:00.108738] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.114 02:02:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.372 02:02:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:00.373 "name": "raid_bdev1", 00:32:00.373 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:00.373 "strip_size_kb": 0, 00:32:00.373 "state": "online", 00:32:00.373 "raid_level": "raid1", 00:32:00.373 "superblock": true, 00:32:00.373 "num_base_bdevs": 2, 00:32:00.373 "num_base_bdevs_discovered": 1, 00:32:00.373 "num_base_bdevs_operational": 1, 00:32:00.373 "base_bdevs_list": [ 00:32:00.373 { 00:32:00.373 "name": null, 00:32:00.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.373 "is_configured": false, 00:32:00.373 "data_offset": 2048, 00:32:00.373 "data_size": 63488 00:32:00.373 }, 00:32:00.373 { 00:32:00.373 "name": "BaseBdev2", 00:32:00.373 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:00.373 "is_configured": true, 00:32:00.373 "data_offset": 2048, 00:32:00.373 "data_size": 63488 00:32:00.373 } 00:32:00.373 ] 00:32:00.373 }' 00:32:00.373 02:02:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:00.373 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.306 02:02:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.564 02:02:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:01.564 "name": "raid_bdev1", 00:32:01.564 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:01.564 "strip_size_kb": 0, 00:32:01.564 "state": "online", 00:32:01.564 "raid_level": "raid1", 00:32:01.564 "superblock": true, 00:32:01.564 "num_base_bdevs": 2, 00:32:01.564 "num_base_bdevs_discovered": 1, 00:32:01.564 "num_base_bdevs_operational": 1, 00:32:01.564 "base_bdevs_list": [ 00:32:01.564 { 00:32:01.564 "name": null, 00:32:01.564 "uuid": "00000000-0000-0000-0000-000000000000", 
00:32:01.564 "is_configured": false, 00:32:01.564 "data_offset": 2048, 00:32:01.564 "data_size": 63488 00:32:01.564 }, 00:32:01.564 { 00:32:01.564 "name": "BaseBdev2", 00:32:01.564 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:01.564 "is_configured": true, 00:32:01.564 "data_offset": 2048, 00:32:01.564 "data_size": 63488 00:32:01.564 } 00:32:01.564 ] 00:32:01.564 }' 00:32:01.564 02:02:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:01.564 02:02:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:01.564 02:02:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:01.564 02:02:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:32:01.564 02:02:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:01.824 [2024-04-24 02:02:01.674497] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:32:01.824 [2024-04-24 02:02:01.674763] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:01.824 02:02:01 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:32:01.824 [2024-04-24 02:02:01.729764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:01.824 [2024-04-24 02:02:01.732717] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:01.824 [2024-04-24 02:02:01.857036] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:02.082 [2024-04-24 02:02:02.075298] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:02.082 [2024-04-24 02:02:02.075798] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:02.341 [2024-04-24 02:02:02.299942] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:02.599 [2024-04-24 02:02:02.432335] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:02.599 [2024-04-24 02:02:02.432924] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.858 02:02:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.858 [2024-04-24 02:02:02.768737] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:32:03.116 02:02:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:03.116 "name": "raid_bdev1", 00:32:03.116 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:03.116 "strip_size_kb": 0, 00:32:03.116 "state": "online", 00:32:03.116 "raid_level": "raid1", 00:32:03.116 "superblock": true, 00:32:03.116 "num_base_bdevs": 2, 00:32:03.116 
"num_base_bdevs_discovered": 2, 00:32:03.116 "num_base_bdevs_operational": 2, 00:32:03.116 "process": { 00:32:03.116 "type": "rebuild", 00:32:03.116 "target": "spare", 00:32:03.116 "progress": { 00:32:03.116 "blocks": 14336, 00:32:03.116 "percent": 22 00:32:03.116 } 00:32:03.116 }, 00:32:03.116 "base_bdevs_list": [ 00:32:03.116 { 00:32:03.116 "name": "spare", 00:32:03.116 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:03.116 "is_configured": true, 00:32:03.116 "data_offset": 2048, 00:32:03.116 "data_size": 63488 00:32:03.116 }, 00:32:03.116 { 00:32:03.116 "name": "BaseBdev2", 00:32:03.116 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:03.116 "is_configured": true, 00:32:03.116 "data_offset": 2048, 00:32:03.116 "data_size": 63488 00:32:03.116 } 00:32:03.116 ] 00:32:03.116 }' 00:32:03.116 02:02:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:03.116 [2024-04-24 02:02:02.985831] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:03.116 [2024-04-24 02:02:02.986514] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:03.116 02:02:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:03.116 02:02:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:03.116 02:02:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:32:03.117 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@657 -- # local timeout=498 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.117 02:02:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.374 02:02:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:03.374 "name": "raid_bdev1", 00:32:03.374 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:03.374 "strip_size_kb": 0, 00:32:03.374 "state": "online", 00:32:03.374 "raid_level": "raid1", 00:32:03.374 "superblock": true, 00:32:03.374 "num_base_bdevs": 2, 00:32:03.374 "num_base_bdevs_discovered": 2, 00:32:03.374 "num_base_bdevs_operational": 2, 00:32:03.374 "process": { 00:32:03.374 "type": "rebuild", 00:32:03.374 "target": "spare", 00:32:03.374 "progress": { 00:32:03.375 "blocks": 20480, 00:32:03.375 "percent": 32 00:32:03.375 } 00:32:03.375 }, 00:32:03.375 "base_bdevs_list": [ 00:32:03.375 { 00:32:03.375 "name": "spare", 00:32:03.375 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:03.375 "is_configured": true, 00:32:03.375 
"data_offset": 2048, 00:32:03.375 "data_size": 63488 00:32:03.375 }, 00:32:03.375 { 00:32:03.375 "name": "BaseBdev2", 00:32:03.375 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:03.375 "is_configured": true, 00:32:03.375 "data_offset": 2048, 00:32:03.375 "data_size": 63488 00:32:03.375 } 00:32:03.375 ] 00:32:03.375 }' 00:32:03.375 02:02:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:03.375 02:02:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:03.375 02:02:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:03.375 02:02:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:03.375 02:02:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:03.633 [2024-04-24 02:02:03.690739] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:32:04.198 [2024-04-24 02:02:04.040204] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:04.456 [2024-04-24 02:02:04.372810] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.456 02:02:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.456 [2024-04-24 02:02:04.493766] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:32:04.456 [2024-04-24 02:02:04.494230] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:32:04.715 02:02:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:04.715 "name": "raid_bdev1", 00:32:04.715 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:04.715 "strip_size_kb": 0, 00:32:04.715 "state": "online", 00:32:04.715 "raid_level": "raid1", 00:32:04.715 "superblock": true, 00:32:04.715 "num_base_bdevs": 2, 00:32:04.715 "num_base_bdevs_discovered": 2, 00:32:04.715 "num_base_bdevs_operational": 2, 00:32:04.715 "process": { 00:32:04.715 "type": "rebuild", 00:32:04.715 "target": "spare", 00:32:04.715 "progress": { 00:32:04.715 "blocks": 40960, 00:32:04.715 "percent": 64 00:32:04.715 } 00:32:04.715 }, 00:32:04.715 "base_bdevs_list": [ 00:32:04.715 { 00:32:04.715 "name": "spare", 00:32:04.715 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:04.715 "is_configured": true, 00:32:04.715 "data_offset": 2048, 00:32:04.715 "data_size": 63488 00:32:04.715 }, 00:32:04.715 { 00:32:04.715 "name": "BaseBdev2", 00:32:04.715 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:04.715 "is_configured": true, 00:32:04.715 "data_offset": 2048, 00:32:04.715 "data_size": 63488 00:32:04.715 } 00:32:04.715 ] 00:32:04.715 }' 00:32:04.715 02:02:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:04.715 02:02:04 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:32:04.715 02:02:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:04.715 02:02:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.715 02:02:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:04.974 [2024-04-24 02:02:04.820586] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:32:04.974 [2024-04-24 02:02:05.036624] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:32:05.540 [2024-04-24 02:02:05.382392] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:32:05.540 [2024-04-24 02:02:05.605105] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.799 02:02:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.057 02:02:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:06.057 "name": "raid_bdev1", 00:32:06.057 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:06.057 "strip_size_kb": 0, 00:32:06.057 "state": "online", 00:32:06.057 "raid_level": "raid1", 00:32:06.057 "superblock": true, 00:32:06.057 "num_base_bdevs": 2, 00:32:06.057 "num_base_bdevs_discovered": 2, 00:32:06.057 "num_base_bdevs_operational": 2, 00:32:06.057 "process": { 00:32:06.057 "type": "rebuild", 00:32:06.057 "target": "spare", 00:32:06.057 "progress": { 00:32:06.057 "blocks": 57344, 00:32:06.057 "percent": 90 00:32:06.057 } 00:32:06.057 }, 00:32:06.057 "base_bdevs_list": [ 00:32:06.057 { 00:32:06.057 "name": "spare", 00:32:06.057 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:06.057 "is_configured": true, 00:32:06.057 "data_offset": 2048, 00:32:06.057 "data_size": 63488 00:32:06.057 }, 00:32:06.057 { 00:32:06.057 "name": "BaseBdev2", 00:32:06.057 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:06.057 "is_configured": true, 00:32:06.057 "data_offset": 2048, 00:32:06.057 "data_size": 63488 00:32:06.057 } 00:32:06.057 ] 00:32:06.057 }' 00:32:06.057 02:02:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:06.057 02:02:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:06.057 02:02:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:06.057 02:02:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:06.057 02:02:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:06.315 [2024-04-24 02:02:06.156435] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:06.315 [2024-04-24 02:02:06.256480] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:06.315 [2024-04-24 02:02:06.265830] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:07.249 "name": "raid_bdev1", 00:32:07.249 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:07.249 "strip_size_kb": 0, 00:32:07.249 "state": "online", 00:32:07.249 "raid_level": "raid1", 00:32:07.249 "superblock": true, 00:32:07.249 "num_base_bdevs": 2, 00:32:07.249 "num_base_bdevs_discovered": 2, 00:32:07.249 "num_base_bdevs_operational": 2, 00:32:07.249 "base_bdevs_list": [ 00:32:07.249 { 00:32:07.249 "name": "spare", 00:32:07.249 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:07.249 "is_configured": true, 00:32:07.249 "data_offset": 2048, 00:32:07.249 "data_size": 63488 00:32:07.249 }, 00:32:07.249 { 00:32:07.249 "name": "BaseBdev2", 00:32:07.249 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:07.249 "is_configured": true, 00:32:07.249 "data_offset": 2048, 00:32:07.249 "data_size": 63488 00:32:07.249 } 00:32:07.249 ] 00:32:07.249 }' 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:07.249 02:02:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@660 -- # break 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.507 02:02:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:07.764 "name": "raid_bdev1", 00:32:07.764 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:07.764 "strip_size_kb": 0, 00:32:07.764 "state": "online", 00:32:07.764 "raid_level": "raid1", 00:32:07.764 "superblock": true, 00:32:07.764 "num_base_bdevs": 2, 00:32:07.764 "num_base_bdevs_discovered": 2, 00:32:07.764 "num_base_bdevs_operational": 2, 00:32:07.764 "base_bdevs_list": [ 00:32:07.764 { 00:32:07.764 "name": "spare", 00:32:07.764 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:07.764 "is_configured": true, 00:32:07.764 "data_offset": 2048, 00:32:07.764 "data_size": 63488 00:32:07.764 }, 00:32:07.764 { 00:32:07.764 "name": "BaseBdev2", 00:32:07.764 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:07.764 "is_configured": true, 
00:32:07.764 "data_offset": 2048, 00:32:07.764 "data_size": 63488 00:32:07.764 } 00:32:07.764 ] 00:32:07.764 }' 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.764 02:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.022 02:02:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:08.022 "name": "raid_bdev1", 00:32:08.022 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:08.022 "strip_size_kb": 0, 00:32:08.022 "state": "online", 00:32:08.022 "raid_level": "raid1", 00:32:08.022 "superblock": true, 00:32:08.022 "num_base_bdevs": 2, 00:32:08.022 "num_base_bdevs_discovered": 2, 00:32:08.022 "num_base_bdevs_operational": 2, 00:32:08.022 "base_bdevs_list": [ 00:32:08.022 { 00:32:08.022 "name": "spare", 00:32:08.022 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:08.022 "is_configured": true, 00:32:08.022 "data_offset": 2048, 00:32:08.022 "data_size": 63488 00:32:08.022 }, 00:32:08.022 { 00:32:08.022 "name": "BaseBdev2", 00:32:08.022 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:08.022 "is_configured": true, 00:32:08.022 "data_offset": 2048, 00:32:08.022 "data_size": 63488 00:32:08.022 } 00:32:08.022 ] 00:32:08.022 }' 00:32:08.022 02:02:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:08.022 02:02:07 -- common/autotest_common.sh@10 -- # set +x 00:32:08.589 02:02:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:08.847 [2024-04-24 02:02:08.764876] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:08.847 [2024-04-24 02:02:08.765141] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:08.847 00:32:08.847 Latency(us) 00:32:08.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.847 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:32:08.847 raid_bdev1 : 11.97 109.04 327.12 0.00 0.00 12673.04 335.48 116342.00 00:32:08.847 =================================================================================================================== 00:32:08.847 Total : 109.04 327.12 0.00 0.00 12673.04 335.48 116342.00 00:32:08.847 [2024-04-24 02:02:08.805057] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:32:08.847 [2024-04-24 02:02:08.805255] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:08.847 [2024-04-24 02:02:08.805380] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:08.847 0 00:32:08.847 [2024-04-24 02:02:08.805675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:32:08.847 02:02:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.847 02:02:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:32:09.104 02:02:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:32:09.104 02:02:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:32:09.104 02:02:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@12 -- # local i 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.104 02:02:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:32:09.362 /dev/nbd0 00:32:09.619 02:02:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:09.619 02:02:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:09.619 02:02:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:32:09.619 02:02:09 -- common/autotest_common.sh@855 -- # local i 00:32:09.619 02:02:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:32:09.619 02:02:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:32:09.619 02:02:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:32:09.619 02:02:09 -- common/autotest_common.sh@859 -- # break 00:32:09.619 02:02:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:09.619 02:02:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:09.620 02:02:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.620 1+0 records in 00:32:09.620 1+0 records out 00:32:09.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528729 s, 7.7 MB/s 00:32:09.620 02:02:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.620 02:02:09 -- common/autotest_common.sh@872 -- # size=4096 00:32:09.620 02:02:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.620 02:02:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:32:09.620 02:02:09 -- common/autotest_common.sh@875 -- # return 0 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.620 02:02:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:32:09.620 02:02:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:32:09.620 02:02:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:32:09.620 02:02:09 -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@12 -- # local i 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.620 02:02:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:32:09.877 /dev/nbd1 00:32:09.877 02:02:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:09.877 02:02:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:09.877 02:02:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:32:09.877 02:02:09 -- common/autotest_common.sh@855 -- # local i 00:32:09.877 02:02:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:32:09.877 02:02:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:32:09.877 02:02:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:32:09.877 02:02:09 -- common/autotest_common.sh@859 -- # break 00:32:09.877 02:02:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:09.877 02:02:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:09.877 02:02:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.877 1+0 records in 00:32:09.877 1+0 records out 00:32:09.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495065 s, 8.3 MB/s 00:32:09.877 02:02:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.877 02:02:09 -- common/autotest_common.sh@872 -- # size=4096 00:32:09.877 02:02:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.877 02:02:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:32:09.877 02:02:09 -- common/autotest_common.sh@875 -- # return 0 00:32:09.877 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.877 02:02:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.877 02:02:09 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:10.135 02:02:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:32:10.135 02:02:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:10.135 02:02:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:32:10.135 02:02:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:10.135 02:02:10 -- bdev/nbd_common.sh@51 -- # local i 00:32:10.135 02:02:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.135 02:02:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@41 -- # break 00:32:10.393 02:02:10 -- 
bdev/nbd_common.sh@45 -- # return 0 00:32:10.393 02:02:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@51 -- # local i 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.393 02:02:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@41 -- # break 00:32:10.651 02:02:10 -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.651 02:02:10 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:32:10.651 02:02:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:32:10.651 02:02:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:32:10.651 02:02:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:10.909 02:02:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:11.167 [2024-04-24 02:02:11.137423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:11.167 [2024-04-24 02:02:11.137757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.167 [2024-04-24 02:02:11.137934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:32:11.167 [2024-04-24 02:02:11.138090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.167 [2024-04-24 02:02:11.141402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.167 [2024-04-24 02:02:11.141682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:11.167 [2024-04-24 02:02:11.142046] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:11.167 [2024-04-24 02:02:11.142335] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:11.167 BaseBdev1 00:32:11.167 02:02:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:32:11.167 02:02:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:32:11.167 02:02:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:32:11.426 02:02:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:11.684 [2024-04-24 02:02:11.674338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:11.684 [2024-04-24 02:02:11.674667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.684 [2024-04-24 
02:02:11.674756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:11.684 [2024-04-24 02:02:11.675000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.684 [2024-04-24 02:02:11.675585] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.684 [2024-04-24 02:02:11.675788] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:11.684 [2024-04-24 02:02:11.676051] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:32:11.684 [2024-04-24 02:02:11.676195] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:32:11.684 [2024-04-24 02:02:11.676294] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:11.684 [2024-04-24 02:02:11.676354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:32:11.684 [2024-04-24 02:02:11.676514] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:11.684 BaseBdev2 00:32:11.684 02:02:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:11.942 02:02:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:12.201 [2024-04-24 02:02:12.258489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:12.201 [2024-04-24 02:02:12.258823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.201 [2024-04-24 02:02:12.258918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:12.201 [2024-04-24 02:02:12.259183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.201 [2024-04-24 02:02:12.259817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.201 [2024-04-24 02:02:12.260004] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:12.201 [2024-04-24 02:02:12.260281] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:32:12.201 [2024-04-24 02:02:12.260424] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:12.201 spare 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:12.201 02:02:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:12.459 02:02:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.459 02:02:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:32:12.459 [2024-04-24 02:02:12.360594] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:32:12.459 [2024-04-24 02:02:12.360846] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:12.459 [2024-04-24 02:02:12.361113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:32:12.459 [2024-04-24 02:02:12.361749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:32:12.459 [2024-04-24 02:02:12.361912] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:32:12.459 [2024-04-24 02:02:12.362225] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.717 02:02:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:12.717 "name": "raid_bdev1", 00:32:12.717 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:12.717 "strip_size_kb": 0, 00:32:12.717 "state": "online", 00:32:12.717 "raid_level": "raid1", 00:32:12.717 "superblock": true, 00:32:12.717 "num_base_bdevs": 2, 00:32:12.717 "num_base_bdevs_discovered": 2, 00:32:12.717 "num_base_bdevs_operational": 2, 00:32:12.717 "base_bdevs_list": [ 00:32:12.717 { 00:32:12.717 "name": "spare", 00:32:12.717 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:12.717 "is_configured": true, 00:32:12.717 "data_offset": 2048, 00:32:12.717 "data_size": 63488 00:32:12.717 }, 00:32:12.717 { 00:32:12.717 "name": "BaseBdev2", 00:32:12.717 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:12.717 "is_configured": true, 00:32:12.717 "data_offset": 2048, 00:32:12.717 "data_size": 63488 00:32:12.717 } 00:32:12.717 ] 00:32:12.717 }' 00:32:12.717 02:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:12.717 02:02:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.283 02:02:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.541 02:02:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:13.541 "name": "raid_bdev1", 00:32:13.541 "uuid": "3b087d87-4e75-4853-aa08-4952258fba40", 00:32:13.541 "strip_size_kb": 0, 00:32:13.541 "state": "online", 00:32:13.541 "raid_level": "raid1", 00:32:13.541 "superblock": true, 00:32:13.541 "num_base_bdevs": 2, 00:32:13.541 "num_base_bdevs_discovered": 2, 00:32:13.541 "num_base_bdevs_operational": 2, 00:32:13.541 "base_bdevs_list": [ 00:32:13.541 { 00:32:13.541 "name": "spare", 00:32:13.541 "uuid": "5202bcee-dbba-5f9b-b7ed-529494de335c", 00:32:13.541 "is_configured": true, 00:32:13.541 "data_offset": 2048, 00:32:13.541 "data_size": 63488 00:32:13.541 }, 00:32:13.541 { 00:32:13.541 "name": "BaseBdev2", 00:32:13.541 "uuid": "f7075a11-c189-5d4a-bd6e-11e80803a9d6", 00:32:13.541 "is_configured": true, 00:32:13.541 "data_offset": 2048, 00:32:13.541 "data_size": 63488 00:32:13.541 } 00:32:13.542 ] 00:32:13.542 }' 00:32:13.542 02:02:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:13.542 
02:02:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:13.542 02:02:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:13.542 02:02:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:32:13.542 02:02:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.542 02:02:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:14.107 02:02:13 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.107 02:02:13 -- bdev/bdev_raid.sh@709 -- # killprocess 132834 00:32:14.107 02:02:13 -- common/autotest_common.sh@936 -- # '[' -z 132834 ']' 00:32:14.107 02:02:13 -- common/autotest_common.sh@940 -- # kill -0 132834 00:32:14.107 02:02:13 -- common/autotest_common.sh@941 -- # uname 00:32:14.107 02:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:14.107 02:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132834 00:32:14.107 02:02:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:14.107 02:02:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:14.107 02:02:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132834' 00:32:14.107 killing process with pid 132834 00:32:14.107 02:02:13 -- common/autotest_common.sh@955 -- # kill 132834 00:32:14.107 Received shutdown signal, test time was about 17.123267 seconds 00:32:14.107 00:32:14.107 Latency(us) 00:32:14.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.107 =================================================================================================================== 00:32:14.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.107 02:02:13 -- common/autotest_common.sh@960 -- # wait 132834 00:32:14.107 [2024-04-24 02:02:13.932004] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:14.107 [2024-04-24 02:02:13.932104] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:14.107 [2024-04-24 02:02:13.932185] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:14.107 [2024-04-24 02:02:13.932196] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:32:14.404 [2024-04-24 02:02:14.207193] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:15.779 02:02:15 -- bdev/bdev_raid.sh@711 -- # return 0 00:32:15.779 00:32:15.779 real 0m23.897s 00:32:15.779 user 0m37.300s 00:32:15.779 sys 0m3.039s 00:32:15.779 ************************************ 00:32:15.779 END TEST raid_rebuild_test_sb_io 00:32:15.779 ************************************ 00:32:15.779 02:02:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:15.779 02:02:15 -- common/autotest_common.sh@10 -- # set +x 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:32:16.061 02:02:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:32:16.061 02:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:16.061 02:02:15 -- common/autotest_common.sh@10 -- # set +x 00:32:16.061 ************************************ 00:32:16.061 START TEST raid_rebuild_test 00:32:16.061 ************************************ 00:32:16.061 02:02:15 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false 
false 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@544 -- # raid_pid=133429 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133429 /var/tmp/spdk-raid.sock 00:32:16.061 02:02:15 -- common/autotest_common.sh@817 -- # '[' -z 133429 ']' 00:32:16.061 02:02:15 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:16.061 02:02:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:16.061 02:02:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:16.061 02:02:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:16.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:16.061 02:02:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:16.061 02:02:15 -- common/autotest_common.sh@10 -- # set +x 00:32:16.061 [2024-04-24 02:02:16.030511] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:32:16.061 [2024-04-24 02:02:16.031081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133429 ] 00:32:16.061 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:32:16.061 Zero copy mechanism will not be used. 00:32:16.337 [2024-04-24 02:02:16.200999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.670 [2024-04-24 02:02:16.497915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.950 [2024-04-24 02:02:16.767135] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:17.209 02:02:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:17.209 02:02:17 -- common/autotest_common.sh@850 -- # return 0 00:32:17.209 02:02:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:17.209 02:02:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:32:17.209 02:02:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:17.469 BaseBdev1 00:32:17.469 02:02:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:17.469 02:02:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:32:17.469 02:02:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:17.727 BaseBdev2 00:32:17.727 02:02:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:17.727 02:02:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:32:17.727 02:02:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:17.985 BaseBdev3 00:32:17.986 02:02:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:17.986 02:02:18 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:32:17.986 02:02:18 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:18.243 BaseBdev4 00:32:18.501 02:02:18 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:18.760 spare_malloc 00:32:18.760 02:02:18 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:19.019 spare_delay 00:32:19.019 02:02:18 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:19.019 [2024-04-24 02:02:19.096919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:19.019 [2024-04-24 02:02:19.097654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:19.019 [2024-04-24 02:02:19.097813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:19.019 [2024-04-24 02:02:19.097941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.019 [2024-04-24 02:02:19.100722] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:19.019 [2024-04-24 02:02:19.100909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:19.019 spare 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:32:19.277 [2024-04-24 02:02:19.337290] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:32:19.277 [2024-04-24 02:02:19.339801] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:19.277 [2024-04-24 02:02:19.340032] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:19.277 [2024-04-24 02:02:19.340106] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:19.277 [2024-04-24 02:02:19.340356] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:32:19.277 [2024-04-24 02:02:19.340479] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:32:19.277 [2024-04-24 02:02:19.340692] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:32:19.277 [2024-04-24 02:02:19.341146] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:32:19.277 [2024-04-24 02:02:19.341266] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:32:19.277 [2024-04-24 02:02:19.341606] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:19.277 02:02:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:19.536 02:02:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.536 02:02:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.536 02:02:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:19.536 "name": "raid_bdev1", 00:32:19.536 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:19.536 "strip_size_kb": 0, 00:32:19.536 "state": "online", 00:32:19.536 "raid_level": "raid1", 00:32:19.536 "superblock": false, 00:32:19.536 "num_base_bdevs": 4, 00:32:19.536 "num_base_bdevs_discovered": 4, 00:32:19.536 "num_base_bdevs_operational": 4, 00:32:19.536 "base_bdevs_list": [ 00:32:19.536 { 00:32:19.536 "name": "BaseBdev1", 00:32:19.536 "uuid": "4041b417-8961-4862-8748-f788bf5c56e4", 00:32:19.536 "is_configured": true, 00:32:19.536 "data_offset": 0, 00:32:19.536 "data_size": 65536 00:32:19.536 }, 00:32:19.536 { 00:32:19.536 "name": "BaseBdev2", 00:32:19.536 "uuid": "2bc8d5fc-941d-4617-b02a-ac3253c96fbe", 00:32:19.536 "is_configured": true, 00:32:19.536 "data_offset": 0, 00:32:19.536 "data_size": 65536 00:32:19.536 }, 00:32:19.536 { 00:32:19.536 "name": "BaseBdev3", 00:32:19.536 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:19.536 "is_configured": true, 00:32:19.536 "data_offset": 0, 00:32:19.536 "data_size": 65536 00:32:19.536 }, 00:32:19.536 { 00:32:19.536 "name": "BaseBdev4", 00:32:19.536 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:19.536 "is_configured": true, 00:32:19.536 "data_offset": 0, 00:32:19.536 "data_size": 65536 00:32:19.536 } 
00:32:19.536 ] 00:32:19.536 }' 00:32:19.536 02:02:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:19.536 02:02:19 -- common/autotest_common.sh@10 -- # set +x 00:32:20.101 02:02:20 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:20.101 02:02:20 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:32:20.359 [2024-04-24 02:02:20.314146] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:20.359 02:02:20 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:32:20.359 02:02:20 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:20.359 02:02:20 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.618 02:02:20 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:32:20.618 02:02:20 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:32:20.618 02:02:20 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:32:20.618 02:02:20 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@12 -- # local i 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:20.618 02:02:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:20.876 [2024-04-24 02:02:20.813921] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:32:20.876 /dev/nbd0 00:32:20.876 02:02:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:20.876 02:02:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:20.876 02:02:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:32:20.876 02:02:20 -- common/autotest_common.sh@855 -- # local i 00:32:20.876 02:02:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:32:20.876 02:02:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:32:20.876 02:02:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:32:20.876 02:02:20 -- common/autotest_common.sh@859 -- # break 00:32:20.876 02:02:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:20.876 02:02:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:20.876 02:02:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:20.876 1+0 records in 00:32:20.876 1+0 records out 00:32:20.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483838 s, 8.5 MB/s 00:32:20.876 02:02:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:20.876 02:02:20 -- common/autotest_common.sh@872 -- # size=4096 00:32:20.876 02:02:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:20.876 02:02:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:32:20.876 02:02:20 -- common/autotest_common.sh@875 -- # return 0 00:32:20.876 02:02:20 -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:32:20.876 02:02:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:20.876 02:02:20 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:32:20.876 02:02:20 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:32:20.876 02:02:20 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:32:27.429 65536+0 records in 00:32:27.429 65536+0 records out 00:32:27.429 33554432 bytes (34 MB, 32 MiB) copied, 6.39072 s, 5.3 MB/s 00:32:27.429 02:02:27 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:27.429 02:02:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:27.429 02:02:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:27.429 02:02:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:27.429 02:02:27 -- bdev/nbd_common.sh@51 -- # local i 00:32:27.429 02:02:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:27.429 02:02:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:27.688 [2024-04-24 02:02:27.575415] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@41 -- # break 00:32:27.688 02:02:27 -- bdev/nbd_common.sh@45 -- # return 0 00:32:27.688 02:02:27 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:27.946 [2024-04-24 02:02:27.853772] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.946 02:02:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.207 02:02:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:28.207 "name": "raid_bdev1", 00:32:28.207 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:28.207 "strip_size_kb": 0, 00:32:28.207 "state": "online", 00:32:28.207 "raid_level": "raid1", 00:32:28.207 "superblock": false, 00:32:28.207 "num_base_bdevs": 4, 00:32:28.207 "num_base_bdevs_discovered": 3, 00:32:28.207 "num_base_bdevs_operational": 3, 00:32:28.207 "base_bdevs_list": [ 00:32:28.207 { 00:32:28.208 "name": 
null, 00:32:28.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.208 "is_configured": false, 00:32:28.208 "data_offset": 0, 00:32:28.208 "data_size": 65536 00:32:28.208 }, 00:32:28.208 { 00:32:28.208 "name": "BaseBdev2", 00:32:28.208 "uuid": "2bc8d5fc-941d-4617-b02a-ac3253c96fbe", 00:32:28.208 "is_configured": true, 00:32:28.208 "data_offset": 0, 00:32:28.208 "data_size": 65536 00:32:28.208 }, 00:32:28.208 { 00:32:28.208 "name": "BaseBdev3", 00:32:28.208 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:28.208 "is_configured": true, 00:32:28.208 "data_offset": 0, 00:32:28.208 "data_size": 65536 00:32:28.208 }, 00:32:28.208 { 00:32:28.208 "name": "BaseBdev4", 00:32:28.208 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:28.208 "is_configured": true, 00:32:28.208 "data_offset": 0, 00:32:28.208 "data_size": 65536 00:32:28.208 } 00:32:28.208 ] 00:32:28.208 }' 00:32:28.208 02:02:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:28.208 02:02:28 -- common/autotest_common.sh@10 -- # set +x 00:32:28.777 02:02:28 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:29.035 [2024-04-24 02:02:28.986481] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:32:29.035 [2024-04-24 02:02:28.986754] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:29.035 [2024-04-24 02:02:29.005990] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:32:29.035 [2024-04-24 02:02:29.008498] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:29.035 02:02:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.969 02:02:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.227 02:02:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:30.227 "name": "raid_bdev1", 00:32:30.227 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:30.227 "strip_size_kb": 0, 00:32:30.227 "state": "online", 00:32:30.227 "raid_level": "raid1", 00:32:30.227 "superblock": false, 00:32:30.227 "num_base_bdevs": 4, 00:32:30.227 "num_base_bdevs_discovered": 4, 00:32:30.227 "num_base_bdevs_operational": 4, 00:32:30.227 "process": { 00:32:30.227 "type": "rebuild", 00:32:30.227 "target": "spare", 00:32:30.227 "progress": { 00:32:30.227 "blocks": 24576, 00:32:30.227 "percent": 37 00:32:30.227 } 00:32:30.227 }, 00:32:30.227 "base_bdevs_list": [ 00:32:30.227 { 00:32:30.227 "name": "spare", 00:32:30.227 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:30.227 "is_configured": true, 00:32:30.227 "data_offset": 0, 00:32:30.227 "data_size": 65536 00:32:30.227 }, 00:32:30.227 { 00:32:30.227 "name": "BaseBdev2", 00:32:30.227 "uuid": "2bc8d5fc-941d-4617-b02a-ac3253c96fbe", 00:32:30.227 "is_configured": true, 00:32:30.227 "data_offset": 0, 00:32:30.227 "data_size": 65536 00:32:30.227 }, 00:32:30.227 { 00:32:30.227 "name": 
"BaseBdev3", 00:32:30.227 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:30.227 "is_configured": true, 00:32:30.227 "data_offset": 0, 00:32:30.227 "data_size": 65536 00:32:30.227 }, 00:32:30.227 { 00:32:30.227 "name": "BaseBdev4", 00:32:30.227 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:30.227 "is_configured": true, 00:32:30.227 "data_offset": 0, 00:32:30.227 "data_size": 65536 00:32:30.227 } 00:32:30.227 ] 00:32:30.227 }' 00:32:30.227 02:02:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:30.485 02:02:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:30.485 02:02:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:30.485 02:02:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:30.485 02:02:30 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:30.743 [2024-04-24 02:02:30.678294] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:30.743 [2024-04-24 02:02:30.719383] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:30.743 [2024-04-24 02:02:30.719741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.743 02:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.001 02:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:31.001 "name": "raid_bdev1", 00:32:31.001 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:31.001 "strip_size_kb": 0, 00:32:31.001 "state": "online", 00:32:31.001 "raid_level": "raid1", 00:32:31.001 "superblock": false, 00:32:31.001 "num_base_bdevs": 4, 00:32:31.001 "num_base_bdevs_discovered": 3, 00:32:31.001 "num_base_bdevs_operational": 3, 00:32:31.001 "base_bdevs_list": [ 00:32:31.001 { 00:32:31.001 "name": null, 00:32:31.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.001 "is_configured": false, 00:32:31.001 "data_offset": 0, 00:32:31.001 "data_size": 65536 00:32:31.001 }, 00:32:31.001 { 00:32:31.001 "name": "BaseBdev2", 00:32:31.001 "uuid": "2bc8d5fc-941d-4617-b02a-ac3253c96fbe", 00:32:31.001 "is_configured": true, 00:32:31.001 "data_offset": 0, 00:32:31.001 "data_size": 65536 00:32:31.001 }, 00:32:31.001 { 00:32:31.001 "name": "BaseBdev3", 00:32:31.001 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:31.001 "is_configured": true, 00:32:31.001 "data_offset": 0, 00:32:31.001 "data_size": 65536 00:32:31.001 }, 00:32:31.001 { 00:32:31.001 "name": "BaseBdev4", 00:32:31.001 "uuid": 
"580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:31.001 "is_configured": true, 00:32:31.001 "data_offset": 0, 00:32:31.001 "data_size": 65536 00:32:31.001 } 00:32:31.001 ] 00:32:31.001 }' 00:32:31.001 02:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:31.001 02:02:31 -- common/autotest_common.sh@10 -- # set +x 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:31.954 "name": "raid_bdev1", 00:32:31.954 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:31.954 "strip_size_kb": 0, 00:32:31.954 "state": "online", 00:32:31.954 "raid_level": "raid1", 00:32:31.954 "superblock": false, 00:32:31.954 "num_base_bdevs": 4, 00:32:31.954 "num_base_bdevs_discovered": 3, 00:32:31.954 "num_base_bdevs_operational": 3, 00:32:31.954 "base_bdevs_list": [ 00:32:31.954 { 00:32:31.954 "name": null, 00:32:31.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.954 "is_configured": false, 00:32:31.954 "data_offset": 0, 00:32:31.954 "data_size": 65536 00:32:31.954 }, 00:32:31.954 { 00:32:31.954 "name": "BaseBdev2", 00:32:31.954 "uuid": "2bc8d5fc-941d-4617-b02a-ac3253c96fbe", 00:32:31.954 "is_configured": true, 00:32:31.954 "data_offset": 0, 00:32:31.954 "data_size": 65536 00:32:31.954 }, 00:32:31.954 { 00:32:31.954 "name": "BaseBdev3", 00:32:31.954 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:31.954 "is_configured": true, 00:32:31.954 "data_offset": 0, 00:32:31.954 "data_size": 65536 00:32:31.954 }, 00:32:31.954 { 00:32:31.954 "name": "BaseBdev4", 00:32:31.954 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:31.954 "is_configured": true, 00:32:31.954 "data_offset": 0, 00:32:31.954 "data_size": 65536 00:32:31.954 } 00:32:31.954 ] 00:32:31.954 }' 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:31.954 02:02:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:32.212 02:02:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:32:32.212 02:02:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:32.212 [2024-04-24 02:02:32.255786] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:32:32.212 [2024-04-24 02:02:32.256078] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:32.212 [2024-04-24 02:02:32.273595] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:32:32.212 [2024-04-24 02:02:32.276053] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:32.212 02:02:32 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@183 -- # 
local raid_bdev_name=raid_bdev1 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:33.582 "name": "raid_bdev1", 00:32:33.582 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:33.582 "strip_size_kb": 0, 00:32:33.582 "state": "online", 00:32:33.582 "raid_level": "raid1", 00:32:33.582 "superblock": false, 00:32:33.582 "num_base_bdevs": 4, 00:32:33.582 "num_base_bdevs_discovered": 4, 00:32:33.582 "num_base_bdevs_operational": 4, 00:32:33.582 "process": { 00:32:33.582 "type": "rebuild", 00:32:33.582 "target": "spare", 00:32:33.582 "progress": { 00:32:33.582 "blocks": 24576, 00:32:33.582 "percent": 37 00:32:33.582 } 00:32:33.582 }, 00:32:33.582 "base_bdevs_list": [ 00:32:33.582 { 00:32:33.582 "name": "spare", 00:32:33.582 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:33.582 "is_configured": true, 00:32:33.582 "data_offset": 0, 00:32:33.582 "data_size": 65536 00:32:33.582 }, 00:32:33.582 { 00:32:33.582 "name": "BaseBdev2", 00:32:33.582 "uuid": "2bc8d5fc-941d-4617-b02a-ac3253c96fbe", 00:32:33.582 "is_configured": true, 00:32:33.582 "data_offset": 0, 00:32:33.582 "data_size": 65536 00:32:33.582 }, 00:32:33.582 { 00:32:33.582 "name": "BaseBdev3", 00:32:33.582 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:33.582 "is_configured": true, 00:32:33.582 "data_offset": 0, 00:32:33.582 "data_size": 65536 00:32:33.582 }, 00:32:33.582 { 00:32:33.582 "name": "BaseBdev4", 00:32:33.582 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:33.582 "is_configured": true, 00:32:33.582 "data_offset": 0, 00:32:33.582 "data_size": 65536 00:32:33.582 } 00:32:33.582 ] 00:32:33.582 }' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:32:33.582 02:02:33 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:33.839 [2024-04-24 02:02:33.846397] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:33.839 [2024-04-24 02:02:33.886328] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.839 02:02:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:34.406 "name": "raid_bdev1", 00:32:34.406 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:34.406 "strip_size_kb": 0, 00:32:34.406 "state": "online", 00:32:34.406 "raid_level": "raid1", 00:32:34.406 "superblock": false, 00:32:34.406 "num_base_bdevs": 4, 00:32:34.406 "num_base_bdevs_discovered": 3, 00:32:34.406 "num_base_bdevs_operational": 3, 00:32:34.406 "process": { 00:32:34.406 "type": "rebuild", 00:32:34.406 "target": "spare", 00:32:34.406 "progress": { 00:32:34.406 "blocks": 36864, 00:32:34.406 "percent": 56 00:32:34.406 } 00:32:34.406 }, 00:32:34.406 "base_bdevs_list": [ 00:32:34.406 { 00:32:34.406 "name": "spare", 00:32:34.406 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:34.406 "is_configured": true, 00:32:34.406 "data_offset": 0, 00:32:34.406 "data_size": 65536 00:32:34.406 }, 00:32:34.406 { 00:32:34.406 "name": null, 00:32:34.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.406 "is_configured": false, 00:32:34.406 "data_offset": 0, 00:32:34.406 "data_size": 65536 00:32:34.406 }, 00:32:34.406 { 00:32:34.406 "name": "BaseBdev3", 00:32:34.406 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:34.406 "is_configured": true, 00:32:34.406 "data_offset": 0, 00:32:34.406 "data_size": 65536 00:32:34.406 }, 00:32:34.406 { 00:32:34.406 "name": "BaseBdev4", 00:32:34.406 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:34.406 "is_configured": true, 00:32:34.406 "data_offset": 0, 00:32:34.406 "data_size": 65536 00:32:34.406 } 00:32:34.406 ] 00:32:34.406 }' 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@657 -- # local timeout=529 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.406 02:02:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.665 02:02:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:34.665 "name": "raid_bdev1", 00:32:34.665 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:34.665 "strip_size_kb": 0, 00:32:34.665 "state": "online", 00:32:34.665 "raid_level": "raid1", 00:32:34.665 "superblock": false, 00:32:34.665 "num_base_bdevs": 4, 00:32:34.665 "num_base_bdevs_discovered": 3, 00:32:34.665 "num_base_bdevs_operational": 3, 00:32:34.665 "process": { 
00:32:34.665 "type": "rebuild", 00:32:34.665 "target": "spare", 00:32:34.665 "progress": { 00:32:34.665 "blocks": 45056, 00:32:34.665 "percent": 68 00:32:34.665 } 00:32:34.665 }, 00:32:34.665 "base_bdevs_list": [ 00:32:34.665 { 00:32:34.665 "name": "spare", 00:32:34.665 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:34.665 "is_configured": true, 00:32:34.665 "data_offset": 0, 00:32:34.665 "data_size": 65536 00:32:34.665 }, 00:32:34.665 { 00:32:34.665 "name": null, 00:32:34.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.665 "is_configured": false, 00:32:34.665 "data_offset": 0, 00:32:34.665 "data_size": 65536 00:32:34.665 }, 00:32:34.665 { 00:32:34.665 "name": "BaseBdev3", 00:32:34.665 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:34.665 "is_configured": true, 00:32:34.665 "data_offset": 0, 00:32:34.665 "data_size": 65536 00:32:34.665 }, 00:32:34.665 { 00:32:34.665 "name": "BaseBdev4", 00:32:34.665 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:34.665 "is_configured": true, 00:32:34.665 "data_offset": 0, 00:32:34.665 "data_size": 65536 00:32:34.665 } 00:32:34.665 ] 00:32:34.665 }' 00:32:34.665 02:02:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:34.665 02:02:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:34.665 02:02:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:34.665 02:02:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:34.665 02:02:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:35.598 [2024-04-24 02:02:35.496637] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:35.598 [2024-04-24 02:02:35.496967] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:35.598 [2024-04-24 02:02:35.497139] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.856 02:02:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.115 02:02:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:36.115 "name": "raid_bdev1", 00:32:36.115 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:36.115 "strip_size_kb": 0, 00:32:36.115 "state": "online", 00:32:36.115 "raid_level": "raid1", 00:32:36.115 "superblock": false, 00:32:36.115 "num_base_bdevs": 4, 00:32:36.115 "num_base_bdevs_discovered": 3, 00:32:36.115 "num_base_bdevs_operational": 3, 00:32:36.115 "base_bdevs_list": [ 00:32:36.115 { 00:32:36.115 "name": "spare", 00:32:36.115 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:36.115 "is_configured": true, 00:32:36.115 "data_offset": 0, 00:32:36.115 "data_size": 65536 00:32:36.115 }, 00:32:36.115 { 00:32:36.115 "name": null, 00:32:36.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.115 "is_configured": false, 00:32:36.115 "data_offset": 0, 00:32:36.115 "data_size": 65536 00:32:36.115 }, 00:32:36.115 { 
00:32:36.115 "name": "BaseBdev3", 00:32:36.115 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:36.115 "is_configured": true, 00:32:36.115 "data_offset": 0, 00:32:36.115 "data_size": 65536 00:32:36.115 }, 00:32:36.115 { 00:32:36.115 "name": "BaseBdev4", 00:32:36.115 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:36.115 "is_configured": true, 00:32:36.115 "data_offset": 0, 00:32:36.115 "data_size": 65536 00:32:36.115 } 00:32:36.115 ] 00:32:36.115 }' 00:32:36.115 02:02:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:36.115 02:02:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:36.115 02:02:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@660 -- # break 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.115 02:02:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:36.373 "name": "raid_bdev1", 00:32:36.373 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:36.373 "strip_size_kb": 0, 00:32:36.373 "state": "online", 00:32:36.373 "raid_level": "raid1", 00:32:36.373 "superblock": false, 00:32:36.373 "num_base_bdevs": 4, 00:32:36.373 "num_base_bdevs_discovered": 3, 00:32:36.373 "num_base_bdevs_operational": 3, 00:32:36.373 "base_bdevs_list": [ 00:32:36.373 { 00:32:36.373 "name": "spare", 00:32:36.373 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:36.373 "is_configured": true, 00:32:36.373 "data_offset": 0, 00:32:36.373 "data_size": 65536 00:32:36.373 }, 00:32:36.373 { 00:32:36.373 "name": null, 00:32:36.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.373 "is_configured": false, 00:32:36.373 "data_offset": 0, 00:32:36.373 "data_size": 65536 00:32:36.373 }, 00:32:36.373 { 00:32:36.373 "name": "BaseBdev3", 00:32:36.373 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:36.373 "is_configured": true, 00:32:36.373 "data_offset": 0, 00:32:36.373 "data_size": 65536 00:32:36.373 }, 00:32:36.373 { 00:32:36.373 "name": "BaseBdev4", 00:32:36.373 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:36.373 "is_configured": true, 00:32:36.373 "data_offset": 0, 00:32:36.373 "data_size": 65536 00:32:36.373 } 00:32:36.373 ] 00:32:36.373 }' 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:36.373 
02:02:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.373 02:02:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.631 02:02:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:36.631 "name": "raid_bdev1", 00:32:36.631 "uuid": "4218e156-11de-4714-9d91-bddb809b7ad3", 00:32:36.631 "strip_size_kb": 0, 00:32:36.631 "state": "online", 00:32:36.631 "raid_level": "raid1", 00:32:36.631 "superblock": false, 00:32:36.631 "num_base_bdevs": 4, 00:32:36.631 "num_base_bdevs_discovered": 3, 00:32:36.631 "num_base_bdevs_operational": 3, 00:32:36.631 "base_bdevs_list": [ 00:32:36.631 { 00:32:36.631 "name": "spare", 00:32:36.631 "uuid": "e2524d35-c7c4-5982-b6b6-44aaf598f0a5", 00:32:36.631 "is_configured": true, 00:32:36.631 "data_offset": 0, 00:32:36.631 "data_size": 65536 00:32:36.631 }, 00:32:36.631 { 00:32:36.631 "name": null, 00:32:36.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.631 "is_configured": false, 00:32:36.631 "data_offset": 0, 00:32:36.631 "data_size": 65536 00:32:36.631 }, 00:32:36.631 { 00:32:36.631 "name": "BaseBdev3", 00:32:36.631 "uuid": "f6fab18b-faf6-494f-b1d3-885d94fe1386", 00:32:36.631 "is_configured": true, 00:32:36.631 "data_offset": 0, 00:32:36.631 "data_size": 65536 00:32:36.631 }, 00:32:36.631 { 00:32:36.631 "name": "BaseBdev4", 00:32:36.631 "uuid": "580ee08c-f8c0-4559-99b9-0a05cd19b94a", 00:32:36.631 "is_configured": true, 00:32:36.631 "data_offset": 0, 00:32:36.631 "data_size": 65536 00:32:36.631 } 00:32:36.631 ] 00:32:36.631 }' 00:32:36.631 02:02:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:36.631 02:02:36 -- common/autotest_common.sh@10 -- # set +x 00:32:37.197 02:02:37 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:37.455 [2024-04-24 02:02:37.397607] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:37.455 [2024-04-24 02:02:37.397819] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:37.455 [2024-04-24 02:02:37.398081] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:37.455 [2024-04-24 02:02:37.398270] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:37.455 [2024-04-24 02:02:37.398363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:32:37.455 02:02:37 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.455 02:02:37 -- bdev/bdev_raid.sh@671 -- # jq length 00:32:37.715 02:02:37 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:32:37.715 02:02:37 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:32:37.715 02:02:37 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@12 -- # local i 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:37.715 02:02:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:37.973 /dev/nbd0 00:32:37.973 02:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:37.973 02:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:37.973 02:02:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:32:37.973 02:02:37 -- common/autotest_common.sh@855 -- # local i 00:32:37.973 02:02:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:32:37.973 02:02:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:32:37.973 02:02:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:32:37.973 02:02:37 -- common/autotest_common.sh@859 -- # break 00:32:37.973 02:02:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:37.973 02:02:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:37.973 02:02:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:37.973 1+0 records in 00:32:37.973 1+0 records out 00:32:37.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519502 s, 7.9 MB/s 00:32:37.973 02:02:37 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:37.973 02:02:37 -- common/autotest_common.sh@872 -- # size=4096 00:32:37.973 02:02:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:37.973 02:02:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:32:37.973 02:02:37 -- common/autotest_common.sh@875 -- # return 0 00:32:37.973 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:37.973 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:37.973 02:02:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:38.230 /dev/nbd1 00:32:38.230 02:02:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:38.230 02:02:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:38.230 02:02:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:32:38.230 02:02:38 -- common/autotest_common.sh@855 -- # local i 00:32:38.230 02:02:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:32:38.230 02:02:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:32:38.230 02:02:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:32:38.230 02:02:38 -- common/autotest_common.sh@859 -- # break 00:32:38.230 02:02:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:38.230 02:02:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:38.230 02:02:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:38.230 1+0 records in 00:32:38.230 1+0 records out 00:32:38.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581627 s, 7.0 MB/s 00:32:38.230 02:02:38 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:38.230 02:02:38 -- common/autotest_common.sh@872 -- # size=4096 00:32:38.230 02:02:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:38.230 02:02:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:32:38.230 02:02:38 -- common/autotest_common.sh@875 -- # return 0 00:32:38.230 02:02:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:38.230 02:02:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:38.230 02:02:38 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:38.487 02:02:38 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:38.488 02:02:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:38.488 02:02:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:38.488 02:02:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:38.488 02:02:38 -- bdev/nbd_common.sh@51 -- # local i 00:32:38.488 02:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:38.488 02:02:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@41 -- # break 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:38.745 02:02:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@41 -- # break 00:32:39.005 02:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:32:39.005 02:02:38 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:32:39.005 02:02:38 -- bdev/bdev_raid.sh@709 -- # killprocess 133429 00:32:39.005 02:02:38 -- common/autotest_common.sh@936 -- # '[' -z 133429 ']' 00:32:39.005 02:02:38 -- common/autotest_common.sh@940 -- # kill -0 133429 00:32:39.005 02:02:38 -- common/autotest_common.sh@941 -- # uname 00:32:39.005 02:02:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:39.005 02:02:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133429 00:32:39.005 02:02:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:39.005 02:02:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:39.005 02:02:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133429' 00:32:39.005 killing process with pid 133429 00:32:39.005 02:02:38 -- common/autotest_common.sh@955 -- # kill 133429 00:32:39.005 Received shutdown 
signal, test time was about 60.000000 seconds 00:32:39.005 00:32:39.005 Latency(us) 00:32:39.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.005 =================================================================================================================== 00:32:39.005 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:39.005 02:02:38 -- common/autotest_common.sh@960 -- # wait 133429 00:32:39.005 [2024-04-24 02:02:38.987579] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:39.572 [2024-04-24 02:02:39.562680] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:32:41.474 00:32:41.474 real 0m25.123s 00:32:41.474 user 0m33.665s 00:32:41.474 sys 0m5.011s 00:32:41.474 ************************************ 00:32:41.474 END TEST raid_rebuild_test 00:32:41.474 ************************************ 00:32:41.474 02:02:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:41.474 02:02:41 -- common/autotest_common.sh@10 -- # set +x 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:32:41.474 02:02:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:32:41.474 02:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:41.474 02:02:41 -- common/autotest_common.sh@10 -- # set +x 00:32:41.474 ************************************ 00:32:41.474 START TEST raid_rebuild_test_sb 00:32:41.474 ************************************ 00:32:41.474 02:02:41 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true false 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@528 -- # '[' raid1 
'!=' raid1 ']' 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=134011 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:41.474 02:02:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134011 /var/tmp/spdk-raid.sock 00:32:41.474 02:02:41 -- common/autotest_common.sh@817 -- # '[' -z 134011 ']' 00:32:41.474 02:02:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:41.474 02:02:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:41.474 02:02:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:41.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:41.474 02:02:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:41.474 02:02:41 -- common/autotest_common.sh@10 -- # set +x 00:32:41.474 [2024-04-24 02:02:41.273545] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:32:41.474 [2024-04-24 02:02:41.273923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134011 ] 00:32:41.474 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:41.474 Zero copy mechanism will not be used. 
00:32:41.474 [2024-04-24 02:02:41.442084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.733 [2024-04-24 02:02:41.686844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.991 [2024-04-24 02:02:41.946216] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:42.250 02:02:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:42.250 02:02:42 -- common/autotest_common.sh@850 -- # return 0 00:32:42.250 02:02:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:42.250 02:02:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:32:42.250 02:02:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:42.550 BaseBdev1_malloc 00:32:42.550 02:02:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:42.808 [2024-04-24 02:02:42.739757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:42.808 [2024-04-24 02:02:42.740072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.808 [2024-04-24 02:02:42.740347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:32:42.808 [2024-04-24 02:02:42.740556] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.808 [2024-04-24 02:02:42.744138] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.808 [2024-04-24 02:02:42.744361] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:42.808 BaseBdev1 00:32:42.808 02:02:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:42.808 02:02:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:32:42.808 02:02:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:43.067 BaseBdev2_malloc 00:32:43.067 02:02:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:43.326 [2024-04-24 02:02:43.349277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:43.326 [2024-04-24 02:02:43.349673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:43.326 [2024-04-24 02:02:43.349877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:43.326 [2024-04-24 02:02:43.350080] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:43.326 [2024-04-24 02:02:43.353376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:43.326 [2024-04-24 02:02:43.353641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:43.326 BaseBdev2 00:32:43.326 02:02:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:43.326 02:02:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:32:43.326 02:02:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:43.584 BaseBdev3_malloc 00:32:43.584 02:02:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:32:43.842 [2024-04-24 02:02:43.899371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:43.842 [2024-04-24 02:02:43.899653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:43.842 [2024-04-24 02:02:43.899740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:32:43.842 [2024-04-24 02:02:43.899977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:43.842 [2024-04-24 02:02:43.902637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:43.842 [2024-04-24 02:02:43.902838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:43.842 BaseBdev3 00:32:43.842 02:02:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:32:43.842 02:02:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:32:43.842 02:02:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:44.100 BaseBdev4_malloc 00:32:44.358 02:02:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:44.358 [2024-04-24 02:02:44.430001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:44.358 [2024-04-24 02:02:44.430350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.358 [2024-04-24 02:02:44.430433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:32:44.358 [2024-04-24 02:02:44.430678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.358 [2024-04-24 02:02:44.433466] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.358 [2024-04-24 02:02:44.433660] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:44.358 BaseBdev4 00:32:44.616 02:02:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:45.031 spare_malloc 00:32:45.031 02:02:44 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:45.031 spare_delay 00:32:45.031 02:02:45 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:45.291 [2024-04-24 02:02:45.352106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:45.291 [2024-04-24 02:02:45.352466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.291 [2024-04-24 02:02:45.352544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:32:45.291 [2024-04-24 02:02:45.352742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.291 [2024-04-24 02:02:45.355435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.291 [2024-04-24 02:02:45.355613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:45.291 spare 00:32:45.550 02:02:45 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:32:45.808 [2024-04-24 02:02:45.640289] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:45.808 [2024-04-24 02:02:45.642722] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:45.808 [2024-04-24 02:02:45.642952] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:45.808 [2024-04-24 02:02:45.643131] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:45.808 [2024-04-24 02:02:45.643439] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:32:45.808 [2024-04-24 02:02:45.643552] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:45.808 [2024-04-24 02:02:45.643747] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:45.808 [2024-04-24 02:02:45.644209] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:32:45.808 [2024-04-24 02:02:45.644324] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:32:45.808 [2024-04-24 02:02:45.644614] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.808 02:02:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.066 02:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:46.066 "name": "raid_bdev1", 00:32:46.066 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:32:46.066 "strip_size_kb": 0, 00:32:46.066 "state": "online", 00:32:46.066 "raid_level": "raid1", 00:32:46.066 "superblock": true, 00:32:46.066 "num_base_bdevs": 4, 00:32:46.066 "num_base_bdevs_discovered": 4, 00:32:46.066 "num_base_bdevs_operational": 4, 00:32:46.066 "base_bdevs_list": [ 00:32:46.066 { 00:32:46.066 "name": "BaseBdev1", 00:32:46.066 "uuid": "0a881404-a288-5b3d-938d-c4a989731960", 00:32:46.066 "is_configured": true, 00:32:46.066 "data_offset": 2048, 00:32:46.066 "data_size": 63488 00:32:46.066 }, 00:32:46.066 { 00:32:46.066 "name": "BaseBdev2", 00:32:46.066 "uuid": "1dea811e-ec03-544b-a3d6-2602a94c9cfc", 00:32:46.066 "is_configured": true, 00:32:46.066 "data_offset": 2048, 00:32:46.066 "data_size": 63488 00:32:46.066 }, 00:32:46.066 { 00:32:46.066 "name": "BaseBdev3", 00:32:46.066 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:32:46.066 "is_configured": true, 00:32:46.066 "data_offset": 2048, 00:32:46.066 "data_size": 63488 00:32:46.066 }, 00:32:46.066 
{ 00:32:46.066 "name": "BaseBdev4", 00:32:46.066 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:32:46.066 "is_configured": true, 00:32:46.066 "data_offset": 2048, 00:32:46.066 "data_size": 63488 00:32:46.066 } 00:32:46.066 ] 00:32:46.066 }' 00:32:46.066 02:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:46.066 02:02:45 -- common/autotest_common.sh@10 -- # set +x 00:32:46.634 02:02:46 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:32:46.634 02:02:46 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:46.892 [2024-04-24 02:02:46.789105] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:46.892 02:02:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:32:46.892 02:02:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.892 02:02:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:47.150 02:02:47 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:32:47.150 02:02:47 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:32:47.151 02:02:47 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:32:47.151 02:02:47 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@12 -- # local i 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:47.151 02:02:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:47.410 [2024-04-24 02:02:47.368979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:47.410 /dev/nbd0 00:32:47.410 02:02:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:47.410 02:02:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:47.410 02:02:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:32:47.410 02:02:47 -- common/autotest_common.sh@855 -- # local i 00:32:47.410 02:02:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:32:47.410 02:02:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:32:47.410 02:02:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:32:47.410 02:02:47 -- common/autotest_common.sh@859 -- # break 00:32:47.410 02:02:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:47.410 02:02:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:47.410 02:02:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:47.410 1+0 records in 00:32:47.410 1+0 records out 00:32:47.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703754 s, 5.8 MB/s 00:32:47.410 02:02:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:47.410 02:02:47 -- common/autotest_common.sh@872 -- # size=4096 00:32:47.410 02:02:47 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:47.410 02:02:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:32:47.410 02:02:47 -- common/autotest_common.sh@875 -- # return 0 00:32:47.410 02:02:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:47.410 02:02:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:47.410 02:02:47 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:32:47.410 02:02:47 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:32:47.410 02:02:47 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:32:55.522 63488+0 records in 00:32:55.522 63488+0 records out 00:32:55.522 32505856 bytes (33 MB, 31 MiB) copied, 7.06859 s, 4.6 MB/s 00:32:55.522 02:02:54 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@51 -- # local i 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:55.522 [2024-04-24 02:02:54.764068] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@41 -- # break 00:32:55.522 02:02:54 -- bdev/nbd_common.sh@45 -- # return 0 00:32:55.522 02:02:54 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:55.522 [2024-04-24 02:02:55.064285] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.522 02:02:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:55.522 "name": "raid_bdev1", 00:32:55.522 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:32:55.522 "strip_size_kb": 0, 00:32:55.522 "state": "online", 00:32:55.522 
"raid_level": "raid1", 00:32:55.522 "superblock": true, 00:32:55.522 "num_base_bdevs": 4, 00:32:55.522 "num_base_bdevs_discovered": 3, 00:32:55.522 "num_base_bdevs_operational": 3, 00:32:55.522 "base_bdevs_list": [ 00:32:55.522 { 00:32:55.522 "name": null, 00:32:55.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.522 "is_configured": false, 00:32:55.522 "data_offset": 2048, 00:32:55.522 "data_size": 63488 00:32:55.522 }, 00:32:55.522 { 00:32:55.522 "name": "BaseBdev2", 00:32:55.522 "uuid": "1dea811e-ec03-544b-a3d6-2602a94c9cfc", 00:32:55.522 "is_configured": true, 00:32:55.522 "data_offset": 2048, 00:32:55.522 "data_size": 63488 00:32:55.522 }, 00:32:55.522 { 00:32:55.522 "name": "BaseBdev3", 00:32:55.522 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:32:55.522 "is_configured": true, 00:32:55.522 "data_offset": 2048, 00:32:55.522 "data_size": 63488 00:32:55.522 }, 00:32:55.522 { 00:32:55.522 "name": "BaseBdev4", 00:32:55.522 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:32:55.522 "is_configured": true, 00:32:55.522 "data_offset": 2048, 00:32:55.522 "data_size": 63488 00:32:55.522 } 00:32:55.522 ] 00:32:55.523 }' 00:32:55.523 02:02:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:55.523 02:02:55 -- common/autotest_common.sh@10 -- # set +x 00:32:56.090 02:02:55 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:56.090 [2024-04-24 02:02:56.172517] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:32:56.090 [2024-04-24 02:02:56.172749] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:56.347 [2024-04-24 02:02:56.192979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:32:56.347 [2024-04-24 02:02:56.195416] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:56.347 02:02:56 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.339 02:02:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.598 02:02:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:57.598 "name": "raid_bdev1", 00:32:57.598 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:32:57.598 "strip_size_kb": 0, 00:32:57.598 "state": "online", 00:32:57.598 "raid_level": "raid1", 00:32:57.598 "superblock": true, 00:32:57.598 "num_base_bdevs": 4, 00:32:57.598 "num_base_bdevs_discovered": 4, 00:32:57.598 "num_base_bdevs_operational": 4, 00:32:57.598 "process": { 00:32:57.598 "type": "rebuild", 00:32:57.598 "target": "spare", 00:32:57.598 "progress": { 00:32:57.598 "blocks": 24576, 00:32:57.598 "percent": 38 00:32:57.598 } 00:32:57.598 }, 00:32:57.598 "base_bdevs_list": [ 00:32:57.598 { 00:32:57.598 "name": "spare", 00:32:57.598 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:32:57.598 "is_configured": true, 00:32:57.598 "data_offset": 2048, 00:32:57.598 "data_size": 63488 00:32:57.598 
}, 00:32:57.598 { 00:32:57.598 "name": "BaseBdev2", 00:32:57.598 "uuid": "1dea811e-ec03-544b-a3d6-2602a94c9cfc", 00:32:57.598 "is_configured": true, 00:32:57.598 "data_offset": 2048, 00:32:57.598 "data_size": 63488 00:32:57.598 }, 00:32:57.598 { 00:32:57.598 "name": "BaseBdev3", 00:32:57.598 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:32:57.598 "is_configured": true, 00:32:57.598 "data_offset": 2048, 00:32:57.598 "data_size": 63488 00:32:57.598 }, 00:32:57.598 { 00:32:57.598 "name": "BaseBdev4", 00:32:57.598 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:32:57.598 "is_configured": true, 00:32:57.598 "data_offset": 2048, 00:32:57.598 "data_size": 63488 00:32:57.598 } 00:32:57.598 ] 00:32:57.598 }' 00:32:57.598 02:02:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:57.598 02:02:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:57.598 02:02:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:57.598 02:02:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:32:57.598 02:02:57 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:57.860 [2024-04-24 02:02:57.857173] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:57.860 [2024-04-24 02:02:57.906315] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:57.860 [2024-04-24 02:02:57.906573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.142 02:02:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.142 02:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:58.142 "name": "raid_bdev1", 00:32:58.142 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:32:58.142 "strip_size_kb": 0, 00:32:58.142 "state": "online", 00:32:58.142 "raid_level": "raid1", 00:32:58.142 "superblock": true, 00:32:58.142 "num_base_bdevs": 4, 00:32:58.142 "num_base_bdevs_discovered": 3, 00:32:58.142 "num_base_bdevs_operational": 3, 00:32:58.142 "base_bdevs_list": [ 00:32:58.142 { 00:32:58.142 "name": null, 00:32:58.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.142 "is_configured": false, 00:32:58.142 "data_offset": 2048, 00:32:58.142 "data_size": 63488 00:32:58.142 }, 00:32:58.142 { 00:32:58.142 "name": "BaseBdev2", 00:32:58.142 "uuid": "1dea811e-ec03-544b-a3d6-2602a94c9cfc", 00:32:58.142 "is_configured": true, 00:32:58.142 "data_offset": 2048, 00:32:58.142 "data_size": 63488 00:32:58.142 }, 00:32:58.142 { 00:32:58.142 
"name": "BaseBdev3", 00:32:58.142 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:32:58.142 "is_configured": true, 00:32:58.142 "data_offset": 2048, 00:32:58.142 "data_size": 63488 00:32:58.142 }, 00:32:58.142 { 00:32:58.142 "name": "BaseBdev4", 00:32:58.142 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:32:58.142 "is_configured": true, 00:32:58.142 "data_offset": 2048, 00:32:58.142 "data_size": 63488 00:32:58.142 } 00:32:58.142 ] 00:32:58.142 }' 00:32:58.142 02:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:58.142 02:02:58 -- common/autotest_common.sh@10 -- # set +x 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.080 02:02:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.080 02:02:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:32:59.080 "name": "raid_bdev1", 00:32:59.080 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:32:59.080 "strip_size_kb": 0, 00:32:59.080 "state": "online", 00:32:59.080 "raid_level": "raid1", 00:32:59.080 "superblock": true, 00:32:59.080 "num_base_bdevs": 4, 00:32:59.080 "num_base_bdevs_discovered": 3, 00:32:59.080 "num_base_bdevs_operational": 3, 00:32:59.080 "base_bdevs_list": [ 00:32:59.080 { 00:32:59.080 "name": null, 00:32:59.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.080 "is_configured": false, 00:32:59.080 "data_offset": 2048, 00:32:59.080 "data_size": 63488 00:32:59.080 }, 00:32:59.080 { 00:32:59.080 "name": "BaseBdev2", 00:32:59.080 "uuid": "1dea811e-ec03-544b-a3d6-2602a94c9cfc", 00:32:59.080 "is_configured": true, 00:32:59.080 "data_offset": 2048, 00:32:59.080 "data_size": 63488 00:32:59.080 }, 00:32:59.080 { 00:32:59.080 "name": "BaseBdev3", 00:32:59.080 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:32:59.080 "is_configured": true, 00:32:59.080 "data_offset": 2048, 00:32:59.080 "data_size": 63488 00:32:59.080 }, 00:32:59.080 { 00:32:59.080 "name": "BaseBdev4", 00:32:59.080 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:32:59.080 "is_configured": true, 00:32:59.080 "data_offset": 2048, 00:32:59.080 "data_size": 63488 00:32:59.080 } 00:32:59.080 ] 00:32:59.080 }' 00:32:59.080 02:02:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:32:59.338 02:02:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:59.338 02:02:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:32:59.338 02:02:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:32:59.338 02:02:59 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:59.596 [2024-04-24 02:02:59.473955] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:32:59.596 [2024-04-24 02:02:59.474227] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:59.596 [2024-04-24 02:02:59.491924] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:32:59.596 [2024-04-24 02:02:59.494411] 
bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:59.596 02:02:59 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.528 02:03:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:00.861 "name": "raid_bdev1", 00:33:00.861 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:00.861 "strip_size_kb": 0, 00:33:00.861 "state": "online", 00:33:00.861 "raid_level": "raid1", 00:33:00.861 "superblock": true, 00:33:00.861 "num_base_bdevs": 4, 00:33:00.861 "num_base_bdevs_discovered": 4, 00:33:00.861 "num_base_bdevs_operational": 4, 00:33:00.861 "process": { 00:33:00.861 "type": "rebuild", 00:33:00.861 "target": "spare", 00:33:00.861 "progress": { 00:33:00.861 "blocks": 24576, 00:33:00.861 "percent": 38 00:33:00.861 } 00:33:00.861 }, 00:33:00.861 "base_bdevs_list": [ 00:33:00.861 { 00:33:00.861 "name": "spare", 00:33:00.861 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:00.861 "is_configured": true, 00:33:00.861 "data_offset": 2048, 00:33:00.861 "data_size": 63488 00:33:00.861 }, 00:33:00.861 { 00:33:00.861 "name": "BaseBdev2", 00:33:00.861 "uuid": "1dea811e-ec03-544b-a3d6-2602a94c9cfc", 00:33:00.861 "is_configured": true, 00:33:00.861 "data_offset": 2048, 00:33:00.861 "data_size": 63488 00:33:00.861 }, 00:33:00.861 { 00:33:00.861 "name": "BaseBdev3", 00:33:00.861 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:00.861 "is_configured": true, 00:33:00.861 "data_offset": 2048, 00:33:00.861 "data_size": 63488 00:33:00.861 }, 00:33:00.861 { 00:33:00.861 "name": "BaseBdev4", 00:33:00.861 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:00.861 "is_configured": true, 00:33:00.861 "data_offset": 2048, 00:33:00.861 "data_size": 63488 00:33:00.861 } 00:33:00.861 ] 00:33:00.861 }' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:33:00.861 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:33:00.861 02:03:00 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:01.118 [2024-04-24 02:03:01.143798] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:01.376 [2024-04-24 02:03:01.205353] 
bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.376 02:03:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:01.636 "name": "raid_bdev1", 00:33:01.636 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:01.636 "strip_size_kb": 0, 00:33:01.636 "state": "online", 00:33:01.636 "raid_level": "raid1", 00:33:01.636 "superblock": true, 00:33:01.636 "num_base_bdevs": 4, 00:33:01.636 "num_base_bdevs_discovered": 3, 00:33:01.636 "num_base_bdevs_operational": 3, 00:33:01.636 "process": { 00:33:01.636 "type": "rebuild", 00:33:01.636 "target": "spare", 00:33:01.636 "progress": { 00:33:01.636 "blocks": 40960, 00:33:01.636 "percent": 64 00:33:01.636 } 00:33:01.636 }, 00:33:01.636 "base_bdevs_list": [ 00:33:01.636 { 00:33:01.636 "name": "spare", 00:33:01.636 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:01.636 "is_configured": true, 00:33:01.636 "data_offset": 2048, 00:33:01.636 "data_size": 63488 00:33:01.636 }, 00:33:01.636 { 00:33:01.636 "name": null, 00:33:01.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.636 "is_configured": false, 00:33:01.636 "data_offset": 2048, 00:33:01.636 "data_size": 63488 00:33:01.636 }, 00:33:01.636 { 00:33:01.636 "name": "BaseBdev3", 00:33:01.636 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:01.636 "is_configured": true, 00:33:01.636 "data_offset": 2048, 00:33:01.636 "data_size": 63488 00:33:01.636 }, 00:33:01.636 { 00:33:01.636 "name": "BaseBdev4", 00:33:01.636 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:01.636 "is_configured": true, 00:33:01.636 "data_offset": 2048, 00:33:01.636 "data_size": 63488 00:33:01.636 } 00:33:01.636 ] 00:33:01.636 }' 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@657 -- # local timeout=556 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:01.636 02:03:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.636 02:03:01 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.895 02:03:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:01.895 "name": "raid_bdev1", 00:33:01.895 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:01.895 "strip_size_kb": 0, 00:33:01.895 "state": "online", 00:33:01.895 "raid_level": "raid1", 00:33:01.895 "superblock": true, 00:33:01.895 "num_base_bdevs": 4, 00:33:01.895 "num_base_bdevs_discovered": 3, 00:33:01.895 "num_base_bdevs_operational": 3, 00:33:01.895 "process": { 00:33:01.895 "type": "rebuild", 00:33:01.895 "target": "spare", 00:33:01.895 "progress": { 00:33:01.895 "blocks": 49152, 00:33:01.895 "percent": 77 00:33:01.895 } 00:33:01.895 }, 00:33:01.895 "base_bdevs_list": [ 00:33:01.895 { 00:33:01.895 "name": "spare", 00:33:01.895 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:01.895 "is_configured": true, 00:33:01.895 "data_offset": 2048, 00:33:01.895 "data_size": 63488 00:33:01.895 }, 00:33:01.895 { 00:33:01.895 "name": null, 00:33:01.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.895 "is_configured": false, 00:33:01.895 "data_offset": 2048, 00:33:01.895 "data_size": 63488 00:33:01.895 }, 00:33:01.895 { 00:33:01.895 "name": "BaseBdev3", 00:33:01.895 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:01.895 "is_configured": true, 00:33:01.895 "data_offset": 2048, 00:33:01.895 "data_size": 63488 00:33:01.895 }, 00:33:01.895 { 00:33:01.895 "name": "BaseBdev4", 00:33:01.895 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:01.895 "is_configured": true, 00:33:01.895 "data_offset": 2048, 00:33:01.895 "data_size": 63488 00:33:01.895 } 00:33:01.895 ] 00:33:01.895 }' 00:33:01.895 02:03:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:02.153 02:03:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:02.153 02:03:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:02.153 02:03:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:02.153 02:03:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:02.720 [2024-04-24 02:03:02.614641] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:02.720 [2024-04-24 02:03:02.614928] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:02.720 [2024-04-24 02:03:02.615242] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:02.979 02:03:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:02.979 02:03:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:02.979 02:03:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:02.979 02:03:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:02.979 02:03:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:02.979 02:03:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:03.237 02:03:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.237 02:03:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:03.496 "name": "raid_bdev1", 00:33:03.496 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:03.496 "strip_size_kb": 0, 00:33:03.496 "state": "online", 00:33:03.496 "raid_level": "raid1", 00:33:03.496 "superblock": true, 00:33:03.496 "num_base_bdevs": 4, 00:33:03.496 "num_base_bdevs_discovered": 3, 
00:33:03.496 "num_base_bdevs_operational": 3, 00:33:03.496 "base_bdevs_list": [ 00:33:03.496 { 00:33:03.496 "name": "spare", 00:33:03.496 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:03.496 "is_configured": true, 00:33:03.496 "data_offset": 2048, 00:33:03.496 "data_size": 63488 00:33:03.496 }, 00:33:03.496 { 00:33:03.496 "name": null, 00:33:03.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.496 "is_configured": false, 00:33:03.496 "data_offset": 2048, 00:33:03.496 "data_size": 63488 00:33:03.496 }, 00:33:03.496 { 00:33:03.496 "name": "BaseBdev3", 00:33:03.496 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:03.496 "is_configured": true, 00:33:03.496 "data_offset": 2048, 00:33:03.496 "data_size": 63488 00:33:03.496 }, 00:33:03.496 { 00:33:03.496 "name": "BaseBdev4", 00:33:03.496 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:03.496 "is_configured": true, 00:33:03.496 "data_offset": 2048, 00:33:03.496 "data_size": 63488 00:33:03.496 } 00:33:03.496 ] 00:33:03.496 }' 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@660 -- # break 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.496 02:03:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:03.755 "name": "raid_bdev1", 00:33:03.755 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:03.755 "strip_size_kb": 0, 00:33:03.755 "state": "online", 00:33:03.755 "raid_level": "raid1", 00:33:03.755 "superblock": true, 00:33:03.755 "num_base_bdevs": 4, 00:33:03.755 "num_base_bdevs_discovered": 3, 00:33:03.755 "num_base_bdevs_operational": 3, 00:33:03.755 "base_bdevs_list": [ 00:33:03.755 { 00:33:03.755 "name": "spare", 00:33:03.755 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:03.755 "is_configured": true, 00:33:03.755 "data_offset": 2048, 00:33:03.755 "data_size": 63488 00:33:03.755 }, 00:33:03.755 { 00:33:03.755 "name": null, 00:33:03.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.755 "is_configured": false, 00:33:03.755 "data_offset": 2048, 00:33:03.755 "data_size": 63488 00:33:03.755 }, 00:33:03.755 { 00:33:03.755 "name": "BaseBdev3", 00:33:03.755 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:03.755 "is_configured": true, 00:33:03.755 "data_offset": 2048, 00:33:03.755 "data_size": 63488 00:33:03.755 }, 00:33:03.755 { 00:33:03.755 "name": "BaseBdev4", 00:33:03.755 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:03.755 "is_configured": true, 00:33:03.755 "data_offset": 2048, 00:33:03.755 "data_size": 63488 00:33:03.755 } 00:33:03.755 ] 00:33:03.755 }' 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:03.755 02:03:03 -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.755 02:03:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.033 02:03:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:04.033 "name": "raid_bdev1", 00:33:04.033 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:04.033 "strip_size_kb": 0, 00:33:04.033 "state": "online", 00:33:04.033 "raid_level": "raid1", 00:33:04.033 "superblock": true, 00:33:04.033 "num_base_bdevs": 4, 00:33:04.033 "num_base_bdevs_discovered": 3, 00:33:04.033 "num_base_bdevs_operational": 3, 00:33:04.033 "base_bdevs_list": [ 00:33:04.033 { 00:33:04.033 "name": "spare", 00:33:04.033 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:04.033 "is_configured": true, 00:33:04.033 "data_offset": 2048, 00:33:04.033 "data_size": 63488 00:33:04.033 }, 00:33:04.033 { 00:33:04.033 "name": null, 00:33:04.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.033 "is_configured": false, 00:33:04.033 "data_offset": 2048, 00:33:04.033 "data_size": 63488 00:33:04.033 }, 00:33:04.033 { 00:33:04.033 "name": "BaseBdev3", 00:33:04.033 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:04.033 "is_configured": true, 00:33:04.033 "data_offset": 2048, 00:33:04.033 "data_size": 63488 00:33:04.033 }, 00:33:04.033 { 00:33:04.033 "name": "BaseBdev4", 00:33:04.033 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:04.033 "is_configured": true, 00:33:04.033 "data_offset": 2048, 00:33:04.033 "data_size": 63488 00:33:04.033 } 00:33:04.033 ] 00:33:04.033 }' 00:33:04.033 02:03:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:04.033 02:03:04 -- common/autotest_common.sh@10 -- # set +x 00:33:04.967 02:03:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:05.226 [2024-04-24 02:03:05.053424] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:05.226 [2024-04-24 02:03:05.053594] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:05.226 [2024-04-24 02:03:05.053752] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.226 [2024-04-24 02:03:05.053872] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.226 [2024-04-24 02:03:05.054110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000010e00 name raid_bdev1, state offline 00:33:05.226 02:03:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.226 02:03:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:33:05.485 02:03:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:33:05.485 02:03:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:33:05.485 02:03:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@12 -- # local i 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:05.485 02:03:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:05.744 /dev/nbd0 00:33:05.744 02:03:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:05.744 02:03:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:05.744 02:03:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:05.744 02:03:05 -- common/autotest_common.sh@855 -- # local i 00:33:05.744 02:03:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:05.744 02:03:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:05.744 02:03:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:05.744 02:03:05 -- common/autotest_common.sh@859 -- # break 00:33:05.744 02:03:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:05.744 02:03:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:05.744 02:03:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:05.744 1+0 records in 00:33:05.744 1+0 records out 00:33:05.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594578 s, 6.9 MB/s 00:33:05.744 02:03:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:05.744 02:03:05 -- common/autotest_common.sh@872 -- # size=4096 00:33:05.744 02:03:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:05.744 02:03:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:05.744 02:03:05 -- common/autotest_common.sh@875 -- # return 0 00:33:05.745 02:03:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:05.745 02:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:05.745 02:03:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:06.003 /dev/nbd1 00:33:06.003 02:03:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:06.003 02:03:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:06.003 02:03:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:06.003 02:03:05 -- common/autotest_common.sh@855 -- # local i 00:33:06.003 02:03:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:06.003 02:03:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:06.003 02:03:05 -- 
common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:06.003 02:03:05 -- common/autotest_common.sh@859 -- # break 00:33:06.003 02:03:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:06.003 02:03:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:06.003 02:03:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:06.003 1+0 records in 00:33:06.003 1+0 records out 00:33:06.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697906 s, 5.9 MB/s 00:33:06.003 02:03:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:06.003 02:03:06 -- common/autotest_common.sh@872 -- # size=4096 00:33:06.003 02:03:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:06.003 02:03:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:06.003 02:03:06 -- common/autotest_common.sh@875 -- # return 0 00:33:06.003 02:03:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:06.003 02:03:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:06.003 02:03:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:06.261 02:03:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:06.261 02:03:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:06.261 02:03:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:06.261 02:03:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:06.261 02:03:06 -- bdev/nbd_common.sh@51 -- # local i 00:33:06.261 02:03:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:06.261 02:03:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@41 -- # break 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@45 -- # return 0 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:06.519 02:03:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@41 -- # break 00:33:06.778 02:03:06 -- bdev/nbd_common.sh@45 -- # return 0 00:33:06.778 02:03:06 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:33:06.778 02:03:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:06.778 02:03:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:33:06.778 02:03:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:33:07.035 02:03:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:07.603 [2024-04-24 02:03:07.427284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:07.603 [2024-04-24 02:03:07.427377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:07.603 [2024-04-24 02:03:07.427419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:07.603 [2024-04-24 02:03:07.427441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:07.603 [2024-04-24 02:03:07.430008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:07.603 [2024-04-24 02:03:07.430077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:07.603 [2024-04-24 02:03:07.430202] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:07.603 [2024-04-24 02:03:07.430254] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:07.603 BaseBdev1 00:33:07.603 02:03:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:07.603 02:03:07 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:33:07.603 02:03:07 -- bdev/bdev_raid.sh@696 -- # continue 00:33:07.603 02:03:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:07.603 02:03:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:33:07.603 02:03:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:33:07.861 02:03:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:08.120 [2024-04-24 02:03:08.011443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:08.120 [2024-04-24 02:03:08.011533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.120 [2024-04-24 02:03:08.011576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:33:08.120 [2024-04-24 02:03:08.011605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.120 [2024-04-24 02:03:08.012086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.120 [2024-04-24 02:03:08.012159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:08.120 [2024-04-24 02:03:08.012280] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:33:08.120 [2024-04-24 02:03:08.012292] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:33:08.120 [2024-04-24 02:03:08.012300] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:08.120 [2024-04-24 02:03:08.012327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:33:08.120 [2024-04-24 02:03:08.012410] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:08.120 BaseBdev3 00:33:08.120 02:03:08 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:08.120 02:03:08 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:33:08.120 02:03:08 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:33:08.379 02:03:08 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:08.637 [2024-04-24 02:03:08.559613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:08.637 [2024-04-24 02:03:08.559722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.637 [2024-04-24 02:03:08.559760] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:08.637 [2024-04-24 02:03:08.559788] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.637 [2024-04-24 02:03:08.560316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.637 [2024-04-24 02:03:08.560375] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:08.637 [2024-04-24 02:03:08.560487] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:33:08.637 [2024-04-24 02:03:08.560511] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:08.637 BaseBdev4 00:33:08.637 02:03:08 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:08.896 02:03:08 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:08.896 [2024-04-24 02:03:08.964706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:08.896 [2024-04-24 02:03:08.964804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.896 [2024-04-24 02:03:08.964841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:08.896 [2024-04-24 02:03:08.964869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.896 [2024-04-24 02:03:08.965374] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.896 [2024-04-24 02:03:08.965433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:08.896 [2024-04-24 02:03:08.965579] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:33:08.896 [2024-04-24 02:03:08.965624] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:08.896 spare 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:09.154 02:03:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.154 [2024-04-24 02:03:09.065745] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:33:09.154 [2024-04-24 02:03:09.065778] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:09.154 [2024-04-24 02:03:09.065953] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:33:09.154 [2024-04-24 02:03:09.066392] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:33:09.154 [2024-04-24 02:03:09.066411] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:33:09.154 [2024-04-24 02:03:09.066581] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:09.413 02:03:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:09.413 "name": "raid_bdev1", 00:33:09.413 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:09.413 "strip_size_kb": 0, 00:33:09.413 "state": "online", 00:33:09.413 "raid_level": "raid1", 00:33:09.413 "superblock": true, 00:33:09.413 "num_base_bdevs": 4, 00:33:09.413 "num_base_bdevs_discovered": 3, 00:33:09.413 "num_base_bdevs_operational": 3, 00:33:09.413 "base_bdevs_list": [ 00:33:09.413 { 00:33:09.413 "name": "spare", 00:33:09.413 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:09.413 "is_configured": true, 00:33:09.413 "data_offset": 2048, 00:33:09.413 "data_size": 63488 00:33:09.413 }, 00:33:09.413 { 00:33:09.413 "name": null, 00:33:09.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.413 "is_configured": false, 00:33:09.413 "data_offset": 2048, 00:33:09.413 "data_size": 63488 00:33:09.413 }, 00:33:09.413 { 00:33:09.413 "name": "BaseBdev3", 00:33:09.413 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:09.413 "is_configured": true, 00:33:09.413 "data_offset": 2048, 00:33:09.413 "data_size": 63488 00:33:09.413 }, 00:33:09.413 { 00:33:09.413 "name": "BaseBdev4", 00:33:09.413 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:09.413 "is_configured": true, 00:33:09.413 "data_offset": 2048, 00:33:09.413 "data_size": 63488 00:33:09.413 } 00:33:09.413 ] 00:33:09.413 }' 00:33:09.413 02:03:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:09.413 02:03:09 -- common/autotest_common.sh@10 -- # set +x 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.978 02:03:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.237 02:03:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:10.237 "name": "raid_bdev1", 00:33:10.237 "uuid": "0873710b-b614-462b-968d-f7bc21ba85d9", 00:33:10.237 "strip_size_kb": 0, 00:33:10.237 "state": "online", 00:33:10.237 "raid_level": "raid1", 00:33:10.237 "superblock": true, 00:33:10.237 "num_base_bdevs": 4, 00:33:10.237 "num_base_bdevs_discovered": 3, 00:33:10.237 "num_base_bdevs_operational": 3, 
00:33:10.237 "base_bdevs_list": [ 00:33:10.237 { 00:33:10.237 "name": "spare", 00:33:10.237 "uuid": "489ba8a8-a877-5bb5-b289-86228aca5f93", 00:33:10.237 "is_configured": true, 00:33:10.237 "data_offset": 2048, 00:33:10.237 "data_size": 63488 00:33:10.237 }, 00:33:10.237 { 00:33:10.237 "name": null, 00:33:10.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.237 "is_configured": false, 00:33:10.237 "data_offset": 2048, 00:33:10.237 "data_size": 63488 00:33:10.237 }, 00:33:10.237 { 00:33:10.237 "name": "BaseBdev3", 00:33:10.237 "uuid": "102566bf-ec56-5195-a490-598a99da9ac2", 00:33:10.237 "is_configured": true, 00:33:10.237 "data_offset": 2048, 00:33:10.237 "data_size": 63488 00:33:10.237 }, 00:33:10.237 { 00:33:10.237 "name": "BaseBdev4", 00:33:10.237 "uuid": "512def9f-7e1d-5fa8-a7e3-d06583562c1a", 00:33:10.237 "is_configured": true, 00:33:10.237 "data_offset": 2048, 00:33:10.237 "data_size": 63488 00:33:10.237 } 00:33:10.237 ] 00:33:10.237 }' 00:33:10.237 02:03:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:10.237 02:03:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:10.237 02:03:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:10.496 02:03:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:10.496 02:03:10 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.496 02:03:10 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:10.754 02:03:10 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:33:10.754 02:03:10 -- bdev/bdev_raid.sh@709 -- # killprocess 134011 00:33:10.754 02:03:10 -- common/autotest_common.sh@936 -- # '[' -z 134011 ']' 00:33:10.754 02:03:10 -- common/autotest_common.sh@940 -- # kill -0 134011 00:33:10.754 02:03:10 -- common/autotest_common.sh@941 -- # uname 00:33:10.754 02:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:10.754 02:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134011 00:33:10.754 02:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:10.754 killing process with pid 134011 00:33:10.754 Received shutdown signal, test time was about 60.000000 seconds 00:33:10.754 00:33:10.754 Latency(us) 00:33:10.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.754 =================================================================================================================== 00:33:10.754 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:10.754 02:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:10.754 02:03:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134011' 00:33:10.754 02:03:10 -- common/autotest_common.sh@955 -- # kill 134011 00:33:10.754 02:03:10 -- common/autotest_common.sh@960 -- # wait 134011 00:33:10.754 [2024-04-24 02:03:10.613501] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:10.754 [2024-04-24 02:03:10.613586] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:10.754 [2024-04-24 02:03:10.613669] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:10.754 [2024-04-24 02:03:10.613685] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:33:11.321 [2024-04-24 02:03:11.171487] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:33:12.693 02:03:12 -- bdev/bdev_raid.sh@711 -- # return 0 00:33:12.693 00:33:12.693 real 0m31.518s 00:33:12.693 user 0m45.004s 00:33:12.693 sys 0m5.953s 00:33:12.693 02:03:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:12.693 02:03:12 -- common/autotest_common.sh@10 -- # set +x 00:33:12.693 ************************************ 00:33:12.693 END TEST raid_rebuild_test_sb 00:33:12.693 ************************************ 00:33:12.693 02:03:12 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:33:12.693 02:03:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:33:12.693 02:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:12.693 02:03:12 -- common/autotest_common.sh@10 -- # set +x 00:33:12.950 ************************************ 00:33:12.950 START TEST raid_rebuild_test_io 00:33:12.950 ************************************ 00:33:12.950 02:03:12 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false true 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@544 -- # raid_pid=134718 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:12.950 02:03:12 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134718 /var/tmp/spdk-raid.sock 00:33:12.950 02:03:12 -- common/autotest_common.sh@817 -- # '[' -z 134718 ']' 00:33:12.950 02:03:12 -- common/autotest_common.sh@821 -- 
# local rpc_addr=/var/tmp/spdk-raid.sock 00:33:12.950 02:03:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:12.950 02:03:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:12.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:12.950 02:03:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:12.950 02:03:12 -- common/autotest_common.sh@10 -- # set +x 00:33:12.950 [2024-04-24 02:03:12.882350] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:33:12.951 [2024-04-24 02:03:12.882535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134718 ] 00:33:12.951 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:12.951 Zero copy mechanism will not be used. 00:33:13.207 [2024-04-24 02:03:13.059662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.207 [2024-04-24 02:03:13.288956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.465 [2024-04-24 02:03:13.545928] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:14.055 02:03:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:14.055 02:03:13 -- common/autotest_common.sh@850 -- # return 0 00:33:14.055 02:03:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:14.055 02:03:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:33:14.055 02:03:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:14.368 BaseBdev1 00:33:14.368 02:03:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:14.368 02:03:14 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:33:14.368 02:03:14 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:14.625 BaseBdev2 00:33:14.625 02:03:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:14.625 02:03:14 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:33:14.625 02:03:14 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:14.883 BaseBdev3 00:33:14.883 02:03:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:14.883 02:03:14 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:33:14.883 02:03:14 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:15.142 BaseBdev4 00:33:15.142 02:03:15 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:15.400 spare_malloc 00:33:15.400 02:03:15 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:15.658 spare_delay 00:33:15.916 02:03:15 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:16.174 [2024-04-24 02:03:16.015282] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:16.174 [2024-04-24 02:03:16.015393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:16.174 [2024-04-24 02:03:16.015432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:33:16.174 [2024-04-24 02:03:16.015483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:16.174 [2024-04-24 02:03:16.018213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:16.174 [2024-04-24 02:03:16.018275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:16.174 spare 00:33:16.174 02:03:16 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:16.432 [2024-04-24 02:03:16.311419] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:16.432 [2024-04-24 02:03:16.313713] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:16.432 [2024-04-24 02:03:16.313784] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:16.432 [2024-04-24 02:03:16.313819] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:16.432 [2024-04-24 02:03:16.313904] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:33:16.432 [2024-04-24 02:03:16.313914] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:16.432 [2024-04-24 02:03:16.314119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:33:16.432 [2024-04-24 02:03:16.314492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:33:16.432 [2024-04-24 02:03:16.314513] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:33:16.432 [2024-04-24 02:03:16.314683] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.432 02:03:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.691 02:03:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:16.691 "name": "raid_bdev1", 00:33:16.691 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:16.691 "strip_size_kb": 0, 00:33:16.691 "state": "online", 00:33:16.691 "raid_level": "raid1", 00:33:16.691 "superblock": false, 00:33:16.691 "num_base_bdevs": 4, 
00:33:16.691 "num_base_bdevs_discovered": 4, 00:33:16.691 "num_base_bdevs_operational": 4, 00:33:16.691 "base_bdevs_list": [ 00:33:16.691 { 00:33:16.691 "name": "BaseBdev1", 00:33:16.691 "uuid": "7ededfc9-1694-48d1-947e-0e1f604837db", 00:33:16.691 "is_configured": true, 00:33:16.691 "data_offset": 0, 00:33:16.691 "data_size": 65536 00:33:16.691 }, 00:33:16.691 { 00:33:16.691 "name": "BaseBdev2", 00:33:16.691 "uuid": "79473f11-d71f-4a8c-970d-3e7e631260d0", 00:33:16.691 "is_configured": true, 00:33:16.691 "data_offset": 0, 00:33:16.691 "data_size": 65536 00:33:16.691 }, 00:33:16.691 { 00:33:16.691 "name": "BaseBdev3", 00:33:16.691 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:16.691 "is_configured": true, 00:33:16.691 "data_offset": 0, 00:33:16.691 "data_size": 65536 00:33:16.691 }, 00:33:16.691 { 00:33:16.691 "name": "BaseBdev4", 00:33:16.691 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:16.691 "is_configured": true, 00:33:16.691 "data_offset": 0, 00:33:16.691 "data_size": 65536 00:33:16.691 } 00:33:16.691 ] 00:33:16.691 }' 00:33:16.691 02:03:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:16.691 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:17.257 02:03:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:17.257 02:03:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:33:17.515 [2024-04-24 02:03:17.463996] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:17.515 02:03:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:33:17.515 02:03:17 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.515 02:03:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:17.773 02:03:17 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:33:17.773 02:03:17 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:33:17.773 02:03:17 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:17.773 02:03:17 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:33:18.031 [2024-04-24 02:03:17.877850] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:33:18.031 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:18.031 Zero copy mechanism will not be used. 00:33:18.031 Running I/O for 60 seconds... 
00:33:18.031 [2024-04-24 02:03:17.979984] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:18.031 [2024-04-24 02:03:17.987025] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.031 02:03:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.289 02:03:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:18.289 "name": "raid_bdev1", 00:33:18.289 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:18.289 "strip_size_kb": 0, 00:33:18.289 "state": "online", 00:33:18.289 "raid_level": "raid1", 00:33:18.289 "superblock": false, 00:33:18.289 "num_base_bdevs": 4, 00:33:18.289 "num_base_bdevs_discovered": 3, 00:33:18.289 "num_base_bdevs_operational": 3, 00:33:18.289 "base_bdevs_list": [ 00:33:18.289 { 00:33:18.289 "name": null, 00:33:18.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.289 "is_configured": false, 00:33:18.289 "data_offset": 0, 00:33:18.289 "data_size": 65536 00:33:18.289 }, 00:33:18.289 { 00:33:18.289 "name": "BaseBdev2", 00:33:18.289 "uuid": "79473f11-d71f-4a8c-970d-3e7e631260d0", 00:33:18.289 "is_configured": true, 00:33:18.289 "data_offset": 0, 00:33:18.289 "data_size": 65536 00:33:18.289 }, 00:33:18.289 { 00:33:18.289 "name": "BaseBdev3", 00:33:18.289 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:18.289 "is_configured": true, 00:33:18.289 "data_offset": 0, 00:33:18.289 "data_size": 65536 00:33:18.289 }, 00:33:18.289 { 00:33:18.289 "name": "BaseBdev4", 00:33:18.289 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:18.289 "is_configured": true, 00:33:18.289 "data_offset": 0, 00:33:18.289 "data_size": 65536 00:33:18.289 } 00:33:18.289 ] 00:33:18.289 }' 00:33:18.289 02:03:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:18.289 02:03:18 -- common/autotest_common.sh@10 -- # set +x 00:33:19.226 02:03:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:19.226 [2024-04-24 02:03:19.253403] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:33:19.226 [2024-04-24 02:03:19.253485] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:19.226 02:03:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:33:19.485 [2024-04-24 02:03:19.323916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:19.485 [2024-04-24 02:03:19.326228] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:19.485 [2024-04-24 
02:03:19.444767] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:19.485 [2024-04-24 02:03:19.445300] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:19.744 [2024-04-24 02:03:19.573457] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:19.744 [2024-04-24 02:03:19.573749] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:19.744 [2024-04-24 02:03:19.813569] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:20.001 [2024-04-24 02:03:19.935676] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:20.257 [2024-04-24 02:03:20.183010] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.257 02:03:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.257 [2024-04-24 02:03:20.311368] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:20.257 [2024-04-24 02:03:20.312086] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:20.514 02:03:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:20.514 "name": "raid_bdev1", 00:33:20.514 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:20.514 "strip_size_kb": 0, 00:33:20.514 "state": "online", 00:33:20.514 "raid_level": "raid1", 00:33:20.514 "superblock": false, 00:33:20.514 "num_base_bdevs": 4, 00:33:20.514 "num_base_bdevs_discovered": 4, 00:33:20.514 "num_base_bdevs_operational": 4, 00:33:20.514 "process": { 00:33:20.514 "type": "rebuild", 00:33:20.514 "target": "spare", 00:33:20.514 "progress": { 00:33:20.514 "blocks": 16384, 00:33:20.514 "percent": 25 00:33:20.514 } 00:33:20.514 }, 00:33:20.514 "base_bdevs_list": [ 00:33:20.514 { 00:33:20.514 "name": "spare", 00:33:20.514 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:20.514 "is_configured": true, 00:33:20.514 "data_offset": 0, 00:33:20.514 "data_size": 65536 00:33:20.514 }, 00:33:20.514 { 00:33:20.514 "name": "BaseBdev2", 00:33:20.514 "uuid": "79473f11-d71f-4a8c-970d-3e7e631260d0", 00:33:20.514 "is_configured": true, 00:33:20.514 "data_offset": 0, 00:33:20.514 "data_size": 65536 00:33:20.514 }, 00:33:20.514 { 00:33:20.514 "name": "BaseBdev3", 00:33:20.514 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:20.514 "is_configured": true, 00:33:20.514 "data_offset": 0, 00:33:20.514 "data_size": 65536 00:33:20.514 }, 00:33:20.514 { 00:33:20.514 "name": "BaseBdev4", 00:33:20.514 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:20.514 "is_configured": 
true, 00:33:20.514 "data_offset": 0, 00:33:20.514 "data_size": 65536 00:33:20.514 } 00:33:20.514 ] 00:33:20.514 }' 00:33:20.514 02:03:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:20.770 02:03:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.770 02:03:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:20.770 02:03:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.770 02:03:20 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:20.770 [2024-04-24 02:03:20.679433] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:20.770 [2024-04-24 02:03:20.680867] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:21.027 [2024-04-24 02:03:20.912228] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:21.027 [2024-04-24 02:03:20.915411] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:21.027 [2024-04-24 02:03:20.950094] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:21.027 [2024-04-24 02:03:21.022785] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:21.285 [2024-04-24 02:03:21.125454] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:21.285 [2024-04-24 02:03:21.137881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:21.285 [2024-04-24 02:03:21.166742] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.285 02:03:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.544 02:03:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:21.544 "name": "raid_bdev1", 00:33:21.544 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:21.544 "strip_size_kb": 0, 00:33:21.544 "state": "online", 00:33:21.544 "raid_level": "raid1", 00:33:21.544 "superblock": false, 00:33:21.544 "num_base_bdevs": 4, 00:33:21.544 "num_base_bdevs_discovered": 3, 00:33:21.544 "num_base_bdevs_operational": 3, 00:33:21.544 "base_bdevs_list": [ 00:33:21.544 { 00:33:21.545 "name": null, 00:33:21.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.545 
"is_configured": false, 00:33:21.545 "data_offset": 0, 00:33:21.545 "data_size": 65536 00:33:21.545 }, 00:33:21.545 { 00:33:21.545 "name": "BaseBdev2", 00:33:21.545 "uuid": "79473f11-d71f-4a8c-970d-3e7e631260d0", 00:33:21.545 "is_configured": true, 00:33:21.545 "data_offset": 0, 00:33:21.545 "data_size": 65536 00:33:21.545 }, 00:33:21.545 { 00:33:21.545 "name": "BaseBdev3", 00:33:21.545 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:21.545 "is_configured": true, 00:33:21.545 "data_offset": 0, 00:33:21.545 "data_size": 65536 00:33:21.545 }, 00:33:21.545 { 00:33:21.545 "name": "BaseBdev4", 00:33:21.545 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:21.545 "is_configured": true, 00:33:21.545 "data_offset": 0, 00:33:21.545 "data_size": 65536 00:33:21.545 } 00:33:21.545 ] 00:33:21.545 }' 00:33:21.545 02:03:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:21.545 02:03:21 -- common/autotest_common.sh@10 -- # set +x 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.112 02:03:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.370 02:03:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:22.370 "name": "raid_bdev1", 00:33:22.370 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:22.370 "strip_size_kb": 0, 00:33:22.370 "state": "online", 00:33:22.370 "raid_level": "raid1", 00:33:22.370 "superblock": false, 00:33:22.370 "num_base_bdevs": 4, 00:33:22.370 "num_base_bdevs_discovered": 3, 00:33:22.370 "num_base_bdevs_operational": 3, 00:33:22.370 "base_bdevs_list": [ 00:33:22.370 { 00:33:22.370 "name": null, 00:33:22.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.370 "is_configured": false, 00:33:22.370 "data_offset": 0, 00:33:22.370 "data_size": 65536 00:33:22.370 }, 00:33:22.370 { 00:33:22.370 "name": "BaseBdev2", 00:33:22.370 "uuid": "79473f11-d71f-4a8c-970d-3e7e631260d0", 00:33:22.370 "is_configured": true, 00:33:22.370 "data_offset": 0, 00:33:22.370 "data_size": 65536 00:33:22.370 }, 00:33:22.370 { 00:33:22.370 "name": "BaseBdev3", 00:33:22.370 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:22.370 "is_configured": true, 00:33:22.370 "data_offset": 0, 00:33:22.370 "data_size": 65536 00:33:22.370 }, 00:33:22.370 { 00:33:22.370 "name": "BaseBdev4", 00:33:22.370 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:22.370 "is_configured": true, 00:33:22.370 "data_offset": 0, 00:33:22.370 "data_size": 65536 00:33:22.370 } 00:33:22.370 ] 00:33:22.370 }' 00:33:22.370 02:03:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:22.370 02:03:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:22.370 02:03:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:22.370 02:03:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:22.370 02:03:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:22.629 [2024-04-24 02:03:22.613762] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:33:22.629 [2024-04-24 02:03:22.613825] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:22.629 [2024-04-24 02:03:22.670954] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:22.629 [2024-04-24 02:03:22.673241] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:22.629 02:03:22 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:33:22.886 [2024-04-24 02:03:22.784361] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:22.886 [2024-04-24 02:03:22.784871] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:23.144 [2024-04-24 02:03:23.006923] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:23.144 [2024-04-24 02:03:23.007226] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:23.401 [2024-04-24 02:03:23.276655] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:23.401 [2024-04-24 02:03:23.278096] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:23.659 [2024-04-24 02:03:23.498027] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.659 02:03:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.918 [2024-04-24 02:03:23.848606] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:23.918 [2024-04-24 02:03:23.849313] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:23.918 02:03:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:23.918 "name": "raid_bdev1", 00:33:23.918 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:23.918 "strip_size_kb": 0, 00:33:23.918 "state": "online", 00:33:23.918 "raid_level": "raid1", 00:33:23.919 "superblock": false, 00:33:23.919 "num_base_bdevs": 4, 00:33:23.919 "num_base_bdevs_discovered": 4, 00:33:23.919 "num_base_bdevs_operational": 4, 00:33:23.919 "process": { 00:33:23.919 "type": "rebuild", 00:33:23.919 "target": "spare", 00:33:23.919 "progress": { 00:33:23.919 "blocks": 16384, 00:33:23.919 "percent": 25 00:33:23.919 } 00:33:23.919 }, 00:33:23.919 "base_bdevs_list": [ 00:33:23.919 { 00:33:23.919 "name": "spare", 00:33:23.919 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:23.919 "is_configured": true, 00:33:23.919 "data_offset": 0, 00:33:23.919 "data_size": 65536 00:33:23.919 }, 00:33:23.919 { 00:33:23.919 "name": "BaseBdev2", 00:33:23.919 "uuid": 
"79473f11-d71f-4a8c-970d-3e7e631260d0", 00:33:23.919 "is_configured": true, 00:33:23.919 "data_offset": 0, 00:33:23.919 "data_size": 65536 00:33:23.919 }, 00:33:23.919 { 00:33:23.919 "name": "BaseBdev3", 00:33:23.919 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:23.919 "is_configured": true, 00:33:23.919 "data_offset": 0, 00:33:23.919 "data_size": 65536 00:33:23.919 }, 00:33:23.919 { 00:33:23.919 "name": "BaseBdev4", 00:33:23.919 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:23.919 "is_configured": true, 00:33:23.919 "data_offset": 0, 00:33:23.919 "data_size": 65536 00:33:23.919 } 00:33:23.919 ] 00:33:23.919 }' 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:33:23.919 02:03:23 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:24.224 [2024-04-24 02:03:24.207969] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:24.224 [2024-04-24 02:03:24.209367] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:24.224 [2024-04-24 02:03:24.236506] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:24.481 [2024-04-24 02:03:24.446679] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:24.481 [2024-04-24 02:03:24.556837] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:33:24.481 [2024-04-24 02:03:24.556895] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005e10 00:33:24.481 [2024-04-24 02:03:24.556945] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.740 02:03:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.740 [2024-04-24 02:03:24.813350] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:33:24.740 [2024-04-24 02:03:24.814258] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
26624 offset_begin: 24576 offset_end: 30720 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:24.998 "name": "raid_bdev1", 00:33:24.998 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:24.998 "strip_size_kb": 0, 00:33:24.998 "state": "online", 00:33:24.998 "raid_level": "raid1", 00:33:24.998 "superblock": false, 00:33:24.998 "num_base_bdevs": 4, 00:33:24.998 "num_base_bdevs_discovered": 3, 00:33:24.998 "num_base_bdevs_operational": 3, 00:33:24.998 "process": { 00:33:24.998 "type": "rebuild", 00:33:24.998 "target": "spare", 00:33:24.998 "progress": { 00:33:24.998 "blocks": 26624, 00:33:24.998 "percent": 40 00:33:24.998 } 00:33:24.998 }, 00:33:24.998 "base_bdevs_list": [ 00:33:24.998 { 00:33:24.998 "name": "spare", 00:33:24.998 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:24.998 "is_configured": true, 00:33:24.998 "data_offset": 0, 00:33:24.998 "data_size": 65536 00:33:24.998 }, 00:33:24.998 { 00:33:24.998 "name": null, 00:33:24.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.998 "is_configured": false, 00:33:24.998 "data_offset": 0, 00:33:24.998 "data_size": 65536 00:33:24.998 }, 00:33:24.998 { 00:33:24.998 "name": "BaseBdev3", 00:33:24.998 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:24.998 "is_configured": true, 00:33:24.998 "data_offset": 0, 00:33:24.998 "data_size": 65536 00:33:24.998 }, 00:33:24.998 { 00:33:24.998 "name": "BaseBdev4", 00:33:24.998 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:24.998 "is_configured": true, 00:33:24.998 "data_offset": 0, 00:33:24.998 "data_size": 65536 00:33:24.998 } 00:33:24.998 ] 00:33:24.998 }' 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@657 -- # local timeout=579 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.998 02:03:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.257 02:03:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:25.257 "name": "raid_bdev1", 00:33:25.257 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:25.257 "strip_size_kb": 0, 00:33:25.257 "state": "online", 00:33:25.257 "raid_level": "raid1", 00:33:25.257 "superblock": false, 00:33:25.257 "num_base_bdevs": 4, 00:33:25.257 "num_base_bdevs_discovered": 3, 00:33:25.257 "num_base_bdevs_operational": 3, 00:33:25.257 "process": { 00:33:25.257 "type": "rebuild", 00:33:25.257 "target": "spare", 00:33:25.257 "progress": { 00:33:25.257 "blocks": 30720, 00:33:25.257 "percent": 46 00:33:25.257 } 00:33:25.257 }, 00:33:25.257 "base_bdevs_list": [ 00:33:25.257 { 00:33:25.257 "name": "spare", 00:33:25.257 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:25.257 
"is_configured": true, 00:33:25.257 "data_offset": 0, 00:33:25.257 "data_size": 65536 00:33:25.257 }, 00:33:25.257 { 00:33:25.257 "name": null, 00:33:25.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.257 "is_configured": false, 00:33:25.257 "data_offset": 0, 00:33:25.257 "data_size": 65536 00:33:25.257 }, 00:33:25.257 { 00:33:25.257 "name": "BaseBdev3", 00:33:25.257 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:25.257 "is_configured": true, 00:33:25.257 "data_offset": 0, 00:33:25.257 "data_size": 65536 00:33:25.257 }, 00:33:25.257 { 00:33:25.257 "name": "BaseBdev4", 00:33:25.257 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:25.257 "is_configured": true, 00:33:25.257 "data_offset": 0, 00:33:25.257 "data_size": 65536 00:33:25.257 } 00:33:25.257 ] 00:33:25.257 }' 00:33:25.257 02:03:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:25.257 02:03:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:25.257 02:03:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:25.257 02:03:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:25.257 02:03:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:25.257 [2024-04-24 02:03:25.336698] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:33:25.824 [2024-04-24 02:03:25.668103] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:25.824 [2024-04-24 02:03:25.675112] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:25.824 [2024-04-24 02:03:25.884806] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:26.391 [2024-04-24 02:03:26.204155] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.391 [2024-04-24 02:03:26.428107] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:26.391 "name": "raid_bdev1", 00:33:26.391 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:26.391 "strip_size_kb": 0, 00:33:26.391 "state": "online", 00:33:26.391 "raid_level": "raid1", 00:33:26.391 "superblock": false, 00:33:26.391 "num_base_bdevs": 4, 00:33:26.391 "num_base_bdevs_discovered": 3, 00:33:26.391 "num_base_bdevs_operational": 3, 00:33:26.391 "process": { 00:33:26.391 "type": "rebuild", 00:33:26.391 "target": "spare", 00:33:26.391 "progress": { 00:33:26.391 "blocks": 47104, 00:33:26.391 "percent": 71 00:33:26.391 } 00:33:26.391 }, 00:33:26.391 
"base_bdevs_list": [ 00:33:26.391 { 00:33:26.391 "name": "spare", 00:33:26.391 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:26.391 "is_configured": true, 00:33:26.391 "data_offset": 0, 00:33:26.391 "data_size": 65536 00:33:26.391 }, 00:33:26.391 { 00:33:26.391 "name": null, 00:33:26.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.391 "is_configured": false, 00:33:26.391 "data_offset": 0, 00:33:26.391 "data_size": 65536 00:33:26.391 }, 00:33:26.391 { 00:33:26.391 "name": "BaseBdev3", 00:33:26.391 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:26.391 "is_configured": true, 00:33:26.391 "data_offset": 0, 00:33:26.391 "data_size": 65536 00:33:26.391 }, 00:33:26.391 { 00:33:26.391 "name": "BaseBdev4", 00:33:26.391 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:26.391 "is_configured": true, 00:33:26.391 "data_offset": 0, 00:33:26.391 "data_size": 65536 00:33:26.391 } 00:33:26.391 ] 00:33:26.391 }' 00:33:26.391 02:03:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:26.650 02:03:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:26.650 02:03:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:26.650 02:03:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:26.650 02:03:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:27.584 [2024-04-24 02:03:27.534327] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.584 02:03:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.584 [2024-04-24 02:03:27.641101] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:27.584 [2024-04-24 02:03:27.643895] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:27.878 "name": "raid_bdev1", 00:33:27.878 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:27.878 "strip_size_kb": 0, 00:33:27.878 "state": "online", 00:33:27.878 "raid_level": "raid1", 00:33:27.878 "superblock": false, 00:33:27.878 "num_base_bdevs": 4, 00:33:27.878 "num_base_bdevs_discovered": 3, 00:33:27.878 "num_base_bdevs_operational": 3, 00:33:27.878 "base_bdevs_list": [ 00:33:27.878 { 00:33:27.878 "name": "spare", 00:33:27.878 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:27.878 "is_configured": true, 00:33:27.878 "data_offset": 0, 00:33:27.878 "data_size": 65536 00:33:27.878 }, 00:33:27.878 { 00:33:27.878 "name": null, 00:33:27.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.878 "is_configured": false, 00:33:27.878 "data_offset": 0, 00:33:27.878 "data_size": 65536 00:33:27.878 }, 00:33:27.878 { 00:33:27.878 "name": "BaseBdev3", 00:33:27.878 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:27.878 "is_configured": true, 00:33:27.878 "data_offset": 0, 00:33:27.878 "data_size": 65536 
00:33:27.878 }, 00:33:27.878 { 00:33:27.878 "name": "BaseBdev4", 00:33:27.878 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:27.878 "is_configured": true, 00:33:27.878 "data_offset": 0, 00:33:27.878 "data_size": 65536 00:33:27.878 } 00:33:27.878 ] 00:33:27.878 }' 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@660 -- # break 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.878 02:03:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:28.155 "name": "raid_bdev1", 00:33:28.155 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:28.155 "strip_size_kb": 0, 00:33:28.155 "state": "online", 00:33:28.155 "raid_level": "raid1", 00:33:28.155 "superblock": false, 00:33:28.155 "num_base_bdevs": 4, 00:33:28.155 "num_base_bdevs_discovered": 3, 00:33:28.155 "num_base_bdevs_operational": 3, 00:33:28.155 "base_bdevs_list": [ 00:33:28.155 { 00:33:28.155 "name": "spare", 00:33:28.155 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:28.155 "is_configured": true, 00:33:28.155 "data_offset": 0, 00:33:28.155 "data_size": 65536 00:33:28.155 }, 00:33:28.155 { 00:33:28.155 "name": null, 00:33:28.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.155 "is_configured": false, 00:33:28.155 "data_offset": 0, 00:33:28.155 "data_size": 65536 00:33:28.155 }, 00:33:28.155 { 00:33:28.155 "name": "BaseBdev3", 00:33:28.155 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:28.155 "is_configured": true, 00:33:28.155 "data_offset": 0, 00:33:28.155 "data_size": 65536 00:33:28.155 }, 00:33:28.155 { 00:33:28.155 "name": "BaseBdev4", 00:33:28.155 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:28.155 "is_configured": true, 00:33:28.155 "data_offset": 0, 00:33:28.155 "data_size": 65536 00:33:28.155 } 00:33:28.155 ] 00:33:28.155 }' 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@122 -- # 
local raid_bdev_info 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.155 02:03:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.413 02:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:28.413 "name": "raid_bdev1", 00:33:28.413 "uuid": "6d2b460a-6441-4902-839b-5d7ce088f070", 00:33:28.413 "strip_size_kb": 0, 00:33:28.413 "state": "online", 00:33:28.413 "raid_level": "raid1", 00:33:28.413 "superblock": false, 00:33:28.413 "num_base_bdevs": 4, 00:33:28.413 "num_base_bdevs_discovered": 3, 00:33:28.413 "num_base_bdevs_operational": 3, 00:33:28.413 "base_bdevs_list": [ 00:33:28.413 { 00:33:28.413 "name": "spare", 00:33:28.413 "uuid": "5baf7642-ce31-5a0c-ad6b-9507a1d9addb", 00:33:28.413 "is_configured": true, 00:33:28.413 "data_offset": 0, 00:33:28.413 "data_size": 65536 00:33:28.413 }, 00:33:28.413 { 00:33:28.413 "name": null, 00:33:28.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.413 "is_configured": false, 00:33:28.413 "data_offset": 0, 00:33:28.413 "data_size": 65536 00:33:28.413 }, 00:33:28.413 { 00:33:28.413 "name": "BaseBdev3", 00:33:28.413 "uuid": "bbbf6cd6-4378-4022-9ffc-19bbff43e199", 00:33:28.413 "is_configured": true, 00:33:28.413 "data_offset": 0, 00:33:28.413 "data_size": 65536 00:33:28.413 }, 00:33:28.413 { 00:33:28.413 "name": "BaseBdev4", 00:33:28.413 "uuid": "71de4b1d-69f7-42f5-9f8b-b5a6892463ae", 00:33:28.413 "is_configured": true, 00:33:28.413 "data_offset": 0, 00:33:28.413 "data_size": 65536 00:33:28.413 } 00:33:28.413 ] 00:33:28.413 }' 00:33:28.413 02:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:28.413 02:03:28 -- common/autotest_common.sh@10 -- # set +x 00:33:28.980 02:03:29 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:29.547 [2024-04-24 02:03:29.336724] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:29.547 [2024-04-24 02:03:29.336767] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:29.547 00:33:29.547 Latency(us) 00:33:29.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.547 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:29.547 raid_bdev1 : 11.52 94.50 283.49 0.00 0.00 14521.74 343.28 122333.87 00:33:29.547 =================================================================================================================== 00:33:29.547 Total : 94.50 283.49 0.00 0.00 14521.74 343.28 122333.87 00:33:29.547 [2024-04-24 02:03:29.430447] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.547 [2024-04-24 02:03:29.430502] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:29.547 [2024-04-24 02:03:29.430587] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.547 [2024-04-24 02:03:29.430597] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:33:29.547 0 00:33:29.547 02:03:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:33:29.547 02:03:29 -- 
bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.805 02:03:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:33:29.805 02:03:29 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:33:29.805 02:03:29 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@12 -- # local i 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:29.805 02:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:33:30.064 /dev/nbd0 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:30.064 02:03:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:30.064 02:03:30 -- common/autotest_common.sh@855 -- # local i 00:33:30.064 02:03:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:30.064 02:03:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:30.064 02:03:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:30.064 02:03:30 -- common/autotest_common.sh@859 -- # break 00:33:30.064 02:03:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:30.064 02:03:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:30.064 02:03:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:30.064 1+0 records in 00:33:30.064 1+0 records out 00:33:30.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569914 s, 7.2 MB/s 00:33:30.064 02:03:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.064 02:03:30 -- common/autotest_common.sh@872 -- # size=4096 00:33:30.064 02:03:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.064 02:03:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:30.064 02:03:30 -- common/autotest_common.sh@875 -- # return 0 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:30.064 02:03:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:33:30.064 02:03:30 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:33:30.064 02:03:30 -- bdev/bdev_raid.sh@678 -- # continue 00:33:30.064 02:03:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:33:30.064 02:03:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:33:30.064 02:03:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@12 -- # local i 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:30.064 02:03:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:30.322 /dev/nbd1 00:33:30.322 02:03:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:30.323 02:03:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:30.323 02:03:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:30.323 02:03:30 -- common/autotest_common.sh@855 -- # local i 00:33:30.323 02:03:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:30.323 02:03:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:30.323 02:03:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:30.323 02:03:30 -- common/autotest_common.sh@859 -- # break 00:33:30.323 02:03:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:30.323 02:03:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:30.323 02:03:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:30.323 1+0 records in 00:33:30.323 1+0 records out 00:33:30.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560917 s, 7.3 MB/s 00:33:30.323 02:03:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.323 02:03:30 -- common/autotest_common.sh@872 -- # size=4096 00:33:30.323 02:03:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.323 02:03:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:30.323 02:03:30 -- common/autotest_common.sh@875 -- # return 0 00:33:30.323 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:30.323 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:30.323 02:03:30 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:30.581 02:03:30 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:30.581 02:03:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:30.581 02:03:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:30.581 02:03:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:30.581 02:03:30 -- bdev/nbd_common.sh@51 -- # local i 00:33:30.581 02:03:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:30.581 02:03:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:30.839 02:03:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:30.839 02:03:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:30.839 02:03:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@41 -- # break 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@45 -- # return 0 00:33:30.840 02:03:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:33:30.840 02:03:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:33:30.840 02:03:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@9 -- 
# local rpc_server=/var/tmp/spdk-raid.sock 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@12 -- # local i 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:30.840 02:03:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:31.098 /dev/nbd1 00:33:31.098 02:03:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:31.098 02:03:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:31.098 02:03:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:31.098 02:03:31 -- common/autotest_common.sh@855 -- # local i 00:33:31.098 02:03:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:31.098 02:03:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:31.098 02:03:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:31.098 02:03:31 -- common/autotest_common.sh@859 -- # break 00:33:31.098 02:03:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:31.098 02:03:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:31.098 02:03:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:31.098 1+0 records in 00:33:31.098 1+0 records out 00:33:31.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287826 s, 14.2 MB/s 00:33:31.098 02:03:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.098 02:03:31 -- common/autotest_common.sh@872 -- # size=4096 00:33:31.098 02:03:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.098 02:03:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:31.098 02:03:31 -- common/autotest_common.sh@875 -- # return 0 00:33:31.098 02:03:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:31.098 02:03:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:31.098 02:03:31 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:31.363 02:03:31 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:31.363 02:03:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.363 02:03:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:31.363 02:03:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:31.363 02:03:31 -- bdev/nbd_common.sh@51 -- # local i 00:33:31.363 02:03:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.363 02:03:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@41 -- # break 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@45 -- # return 0 
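For reference, the NBD comparison traced above reduces to roughly the following shell sketch. It is illustrative only, not the test's own code: the RPC socket, bdev names and zero data offset are the ones shown in this run, and the empty list entry mirrors how the removed BaseBdev2 slot is skipped.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC nbd_start_disk spare /dev/nbd0            # export the rebuild target over NBD
  for bdev in "" BaseBdev3 BaseBdev4; do         # slot 1 (removed BaseBdev2) is empty
      [ -z "$bdev" ] && continue
      $RPC nbd_start_disk "$bdev" /dev/nbd1
      cmp -i 0 /dev/nbd0 /dev/nbd1               # RAID1 mirrors must match byte-for-byte at offset 0
      $RPC nbd_stop_disk /dev/nbd1
  done
  $RPC nbd_stop_disk /dev/nbd0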
00:33:31.632 02:03:31 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:31.632 02:03:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.633 02:03:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:31.633 02:03:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:31.633 02:03:31 -- bdev/nbd_common.sh@51 -- # local i 00:33:31.633 02:03:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.633 02:03:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@41 -- # break 00:33:31.891 02:03:31 -- bdev/nbd_common.sh@45 -- # return 0 00:33:31.891 02:03:31 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:33:31.891 02:03:31 -- bdev/bdev_raid.sh@709 -- # killprocess 134718 00:33:31.891 02:03:31 -- common/autotest_common.sh@936 -- # '[' -z 134718 ']' 00:33:31.891 02:03:31 -- common/autotest_common.sh@940 -- # kill -0 134718 00:33:31.891 02:03:31 -- common/autotest_common.sh@941 -- # uname 00:33:31.891 02:03:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:31.891 02:03:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134718 00:33:31.891 02:03:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:31.891 02:03:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:31.891 02:03:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134718' 00:33:31.891 killing process with pid 134718 00:33:31.891 02:03:31 -- common/autotest_common.sh@955 -- # kill 134718 00:33:31.891 Received shutdown signal, test time was about 13.982047 seconds 00:33:31.891 00:33:31.891 Latency(us) 00:33:31.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.892 =================================================================================================================== 00:33:31.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.892 [2024-04-24 02:03:31.862426] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:31.892 02:03:31 -- common/autotest_common.sh@960 -- # wait 134718 00:33:32.457 [2024-04-24 02:03:32.338870] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:33.829 ************************************ 00:33:33.829 END TEST raid_rebuild_test_io 00:33:33.829 ************************************ 00:33:33.829 02:03:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:33:33.829 00:33:33.829 real 0m21.089s 00:33:33.829 user 0m31.858s 00:33:33.829 sys 0m2.959s 00:33:33.829 02:03:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:33.829 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:33:34.087 02:03:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:33:34.087 02:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:34.087 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:33:34.087 
************************************ 00:33:34.087 START TEST raid_rebuild_test_sb_io 00:33:34.087 ************************************ 00:33:34.087 02:03:33 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true true 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:33:34.087 02:03:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=135259 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135259 /var/tmp/spdk-raid.sock 00:33:34.087 02:03:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:34.087 02:03:34 -- common/autotest_common.sh@817 -- # '[' -z 135259 ']' 00:33:34.087 02:03:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:34.087 02:03:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:34.087 02:03:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:34.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:34.088 02:03:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:34.088 02:03:34 -- common/autotest_common.sh@10 -- # set +x 00:33:34.088 [2024-04-24 02:03:34.084282] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
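The launch step traced here amounts to starting bdevperf idle against a dedicated RPC socket and only kicking off I/O once the raid bdev has been assembled. A minimal sketch, using the flags and paths shown in this run (perform_tests is issued later in the trace, after the base bdevs and raid_bdev1 are created):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # block until the RPC socket is up
  # ... create base bdevs and raid_bdev1 over RPC, then start the background I/O:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests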
00:33:34.088 [2024-04-24 02:03:34.084700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135259 ] 00:33:34.088 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:34.088 Zero copy mechanism will not be used. 00:33:34.346 [2024-04-24 02:03:34.265772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.605 [2024-04-24 02:03:34.496662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.864 [2024-04-24 02:03:34.744052] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:34.864 02:03:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:34.864 02:03:34 -- common/autotest_common.sh@850 -- # return 0 00:33:34.864 02:03:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:34.864 02:03:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:33:34.864 02:03:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:35.431 BaseBdev1_malloc 00:33:35.431 02:03:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:35.431 [2024-04-24 02:03:35.437294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:35.431 [2024-04-24 02:03:35.437552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.431 [2024-04-24 02:03:35.437618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:33:35.431 [2024-04-24 02:03:35.437748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.431 [2024-04-24 02:03:35.440235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.431 [2024-04-24 02:03:35.440390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:35.431 BaseBdev1 00:33:35.431 02:03:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:35.431 02:03:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:33:35.431 02:03:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:35.690 BaseBdev2_malloc 00:33:35.950 02:03:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:35.950 [2024-04-24 02:03:36.032112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:35.950 [2024-04-24 02:03:36.032383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.950 [2024-04-24 02:03:36.032463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:35.950 [2024-04-24 02:03:36.032599] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.950 [2024-04-24 02:03:36.035180] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.950 [2024-04-24 02:03:36.035352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:36.210 BaseBdev2 00:33:36.210 02:03:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:33:36.210 02:03:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:33:36.210 02:03:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:36.468 BaseBdev3_malloc 00:33:36.468 02:03:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:36.727 [2024-04-24 02:03:36.593463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:36.727 [2024-04-24 02:03:36.593746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.728 [2024-04-24 02:03:36.593819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:33:36.728 [2024-04-24 02:03:36.593967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.728 [2024-04-24 02:03:36.596580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.728 [2024-04-24 02:03:36.596744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:36.728 BaseBdev3 00:33:36.728 02:03:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:33:36.728 02:03:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:33:36.728 02:03:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:36.986 BaseBdev4_malloc 00:33:36.986 02:03:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:37.242 [2024-04-24 02:03:37.107755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:37.242 [2024-04-24 02:03:37.107972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.242 [2024-04-24 02:03:37.108057] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:33:37.242 [2024-04-24 02:03:37.108199] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.242 [2024-04-24 02:03:37.110646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.242 [2024-04-24 02:03:37.110828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:37.242 BaseBdev4 00:33:37.242 02:03:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:37.505 spare_malloc 00:33:37.505 02:03:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:37.763 spare_delay 00:33:37.763 02:03:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:37.763 [2024-04-24 02:03:37.778122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:37.763 [2024-04-24 02:03:37.778377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.763 [2024-04-24 02:03:37.778451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:37.763 [2024-04-24 02:03:37.778564] vbdev_passthru.c: 691:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:33:37.763 [2024-04-24 02:03:37.780891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.763 [2024-04-24 02:03:37.781056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:37.763 spare 00:33:37.763 02:03:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:38.020 [2024-04-24 02:03:37.970192] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:38.020 [2024-04-24 02:03:37.972290] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:38.020 [2024-04-24 02:03:37.972486] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:38.020 [2024-04-24 02:03:37.972652] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:38.020 [2024-04-24 02:03:37.972933] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:33:38.020 [2024-04-24 02:03:37.973022] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:38.020 [2024-04-24 02:03:37.973186] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:38.020 [2024-04-24 02:03:37.973602] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:33:38.020 [2024-04-24 02:03:37.973696] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:33:38.020 [2024-04-24 02:03:37.973934] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.020 02:03:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.278 02:03:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:38.278 "name": "raid_bdev1", 00:33:38.278 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:38.278 "strip_size_kb": 0, 00:33:38.278 "state": "online", 00:33:38.278 "raid_level": "raid1", 00:33:38.278 "superblock": true, 00:33:38.278 "num_base_bdevs": 4, 00:33:38.278 "num_base_bdevs_discovered": 4, 00:33:38.278 "num_base_bdevs_operational": 4, 00:33:38.278 "base_bdevs_list": [ 00:33:38.278 { 00:33:38.278 "name": "BaseBdev1", 00:33:38.278 "uuid": "5f71ded5-6498-5bfd-9cb7-bd36f339878c", 00:33:38.278 "is_configured": true, 00:33:38.278 "data_offset": 2048, 00:33:38.278 "data_size": 63488 00:33:38.278 }, 00:33:38.278 { 00:33:38.278 "name": "BaseBdev2", 
00:33:38.278 "uuid": "577a7c6e-2dc9-5a92-adb8-0831038c920a", 00:33:38.278 "is_configured": true, 00:33:38.278 "data_offset": 2048, 00:33:38.278 "data_size": 63488 00:33:38.278 }, 00:33:38.278 { 00:33:38.278 "name": "BaseBdev3", 00:33:38.278 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:38.278 "is_configured": true, 00:33:38.278 "data_offset": 2048, 00:33:38.278 "data_size": 63488 00:33:38.278 }, 00:33:38.278 { 00:33:38.278 "name": "BaseBdev4", 00:33:38.278 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:38.278 "is_configured": true, 00:33:38.278 "data_offset": 2048, 00:33:38.278 "data_size": 63488 00:33:38.278 } 00:33:38.278 ] 00:33:38.278 }' 00:33:38.278 02:03:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:38.278 02:03:38 -- common/autotest_common.sh@10 -- # set +x 00:33:38.843 02:03:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:38.843 02:03:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:33:39.102 [2024-04-24 02:03:39.134684] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:39.102 02:03:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:33:39.102 02:03:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.102 02:03:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:39.359 02:03:39 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:33:39.359 02:03:39 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:33:39.359 02:03:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:39.359 02:03:39 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:33:39.618 [2024-04-24 02:03:39.531880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:39.618 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:39.618 Zero copy mechanism will not be used. 00:33:39.618 Running I/O for 60 seconds... 
00:33:39.618 [2024-04-24 02:03:39.645175] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:39.618 [2024-04-24 02:03:39.651181] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.618 02:03:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.185 02:03:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:40.185 "name": "raid_bdev1", 00:33:40.185 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:40.185 "strip_size_kb": 0, 00:33:40.185 "state": "online", 00:33:40.185 "raid_level": "raid1", 00:33:40.185 "superblock": true, 00:33:40.185 "num_base_bdevs": 4, 00:33:40.185 "num_base_bdevs_discovered": 3, 00:33:40.185 "num_base_bdevs_operational": 3, 00:33:40.185 "base_bdevs_list": [ 00:33:40.185 { 00:33:40.185 "name": null, 00:33:40.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.185 "is_configured": false, 00:33:40.185 "data_offset": 2048, 00:33:40.185 "data_size": 63488 00:33:40.185 }, 00:33:40.185 { 00:33:40.185 "name": "BaseBdev2", 00:33:40.185 "uuid": "577a7c6e-2dc9-5a92-adb8-0831038c920a", 00:33:40.185 "is_configured": true, 00:33:40.185 "data_offset": 2048, 00:33:40.185 "data_size": 63488 00:33:40.185 }, 00:33:40.185 { 00:33:40.185 "name": "BaseBdev3", 00:33:40.185 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:40.185 "is_configured": true, 00:33:40.185 "data_offset": 2048, 00:33:40.185 "data_size": 63488 00:33:40.185 }, 00:33:40.185 { 00:33:40.185 "name": "BaseBdev4", 00:33:40.185 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:40.185 "is_configured": true, 00:33:40.185 "data_offset": 2048, 00:33:40.185 "data_size": 63488 00:33:40.185 } 00:33:40.185 ] 00:33:40.185 }' 00:33:40.185 02:03:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:40.185 02:03:39 -- common/autotest_common.sh@10 -- # set +x 00:33:40.751 02:03:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:40.751 [2024-04-24 02:03:40.819717] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:33:40.751 [2024-04-24 02:03:40.819923] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:41.008 [2024-04-24 02:03:40.869520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:41.008 [2024-04-24 02:03:40.871946] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:41.008 02:03:40 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:33:41.008 
[2024-04-24 02:03:40.991530] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:41.008 [2024-04-24 02:03:40.992269] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:41.266 [2024-04-24 02:03:41.206133] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:41.266 [2024-04-24 02:03:41.207094] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:41.832 [2024-04-24 02:03:41.684655] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:41.832 [2024-04-24 02:03:41.685624] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.832 02:03:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.091 [2024-04-24 02:03:42.019934] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:42.091 [2024-04-24 02:03:42.140187] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:42.091 [2024-04-24 02:03:42.141081] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:42.349 02:03:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:42.349 "name": "raid_bdev1", 00:33:42.349 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:42.349 "strip_size_kb": 0, 00:33:42.349 "state": "online", 00:33:42.349 "raid_level": "raid1", 00:33:42.349 "superblock": true, 00:33:42.349 "num_base_bdevs": 4, 00:33:42.349 "num_base_bdevs_discovered": 4, 00:33:42.349 "num_base_bdevs_operational": 4, 00:33:42.349 "process": { 00:33:42.349 "type": "rebuild", 00:33:42.349 "target": "spare", 00:33:42.349 "progress": { 00:33:42.349 "blocks": 16384, 00:33:42.349 "percent": 25 00:33:42.349 } 00:33:42.349 }, 00:33:42.349 "base_bdevs_list": [ 00:33:42.349 { 00:33:42.349 "name": "spare", 00:33:42.350 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:42.350 "is_configured": true, 00:33:42.350 "data_offset": 2048, 00:33:42.350 "data_size": 63488 00:33:42.350 }, 00:33:42.350 { 00:33:42.350 "name": "BaseBdev2", 00:33:42.350 "uuid": "577a7c6e-2dc9-5a92-adb8-0831038c920a", 00:33:42.350 "is_configured": true, 00:33:42.350 "data_offset": 2048, 00:33:42.350 "data_size": 63488 00:33:42.350 }, 00:33:42.350 { 00:33:42.350 "name": "BaseBdev3", 00:33:42.350 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:42.350 "is_configured": true, 00:33:42.350 "data_offset": 2048, 00:33:42.350 "data_size": 63488 00:33:42.350 }, 00:33:42.350 { 00:33:42.350 "name": "BaseBdev4", 00:33:42.350 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 
00:33:42.350 "is_configured": true, 00:33:42.350 "data_offset": 2048, 00:33:42.350 "data_size": 63488 00:33:42.350 } 00:33:42.350 ] 00:33:42.350 }' 00:33:42.350 02:03:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:42.350 02:03:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:42.350 02:03:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:42.350 02:03:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:42.350 02:03:42 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:42.607 [2024-04-24 02:03:42.545984] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:42.607 [2024-04-24 02:03:42.643046] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:42.607 [2024-04-24 02:03:42.643919] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:42.865 [2024-04-24 02:03:42.754665] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:42.865 [2024-04-24 02:03:42.758887] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.865 [2024-04-24 02:03:42.794678] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.865 02:03:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.123 02:03:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:43.123 "name": "raid_bdev1", 00:33:43.123 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:43.123 "strip_size_kb": 0, 00:33:43.123 "state": "online", 00:33:43.123 "raid_level": "raid1", 00:33:43.123 "superblock": true, 00:33:43.123 "num_base_bdevs": 4, 00:33:43.123 "num_base_bdevs_discovered": 3, 00:33:43.123 "num_base_bdevs_operational": 3, 00:33:43.123 "base_bdevs_list": [ 00:33:43.123 { 00:33:43.123 "name": null, 00:33:43.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.123 "is_configured": false, 00:33:43.123 "data_offset": 2048, 00:33:43.123 "data_size": 63488 00:33:43.123 }, 00:33:43.123 { 00:33:43.123 "name": "BaseBdev2", 00:33:43.123 "uuid": "577a7c6e-2dc9-5a92-adb8-0831038c920a", 00:33:43.123 "is_configured": true, 00:33:43.123 "data_offset": 2048, 00:33:43.123 "data_size": 63488 00:33:43.123 }, 00:33:43.123 { 00:33:43.123 "name": "BaseBdev3", 00:33:43.123 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:43.123 
"is_configured": true, 00:33:43.123 "data_offset": 2048, 00:33:43.123 "data_size": 63488 00:33:43.123 }, 00:33:43.123 { 00:33:43.123 "name": "BaseBdev4", 00:33:43.123 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:43.123 "is_configured": true, 00:33:43.123 "data_offset": 2048, 00:33:43.123 "data_size": 63488 00:33:43.123 } 00:33:43.123 ] 00:33:43.123 }' 00:33:43.123 02:03:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:43.123 02:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.058 02:03:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.058 02:03:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:44.058 "name": "raid_bdev1", 00:33:44.058 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:44.058 "strip_size_kb": 0, 00:33:44.058 "state": "online", 00:33:44.058 "raid_level": "raid1", 00:33:44.058 "superblock": true, 00:33:44.058 "num_base_bdevs": 4, 00:33:44.058 "num_base_bdevs_discovered": 3, 00:33:44.058 "num_base_bdevs_operational": 3, 00:33:44.058 "base_bdevs_list": [ 00:33:44.058 { 00:33:44.058 "name": null, 00:33:44.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.058 "is_configured": false, 00:33:44.058 "data_offset": 2048, 00:33:44.058 "data_size": 63488 00:33:44.058 }, 00:33:44.058 { 00:33:44.058 "name": "BaseBdev2", 00:33:44.058 "uuid": "577a7c6e-2dc9-5a92-adb8-0831038c920a", 00:33:44.058 "is_configured": true, 00:33:44.058 "data_offset": 2048, 00:33:44.058 "data_size": 63488 00:33:44.058 }, 00:33:44.058 { 00:33:44.058 "name": "BaseBdev3", 00:33:44.058 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:44.058 "is_configured": true, 00:33:44.058 "data_offset": 2048, 00:33:44.058 "data_size": 63488 00:33:44.058 }, 00:33:44.058 { 00:33:44.058 "name": "BaseBdev4", 00:33:44.058 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:44.058 "is_configured": true, 00:33:44.058 "data_offset": 2048, 00:33:44.058 "data_size": 63488 00:33:44.058 } 00:33:44.058 ] 00:33:44.058 }' 00:33:44.058 02:03:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:44.316 02:03:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:44.316 02:03:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:44.316 02:03:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:44.316 02:03:44 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:44.575 [2024-04-24 02:03:44.480905] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:33:44.575 [2024-04-24 02:03:44.481176] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:44.575 [2024-04-24 02:03:44.538255] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:44.575 [2024-04-24 02:03:44.540848] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:44.575 02:03:44 -- 
bdev/bdev_raid.sh@614 -- # sleep 1 00:33:44.833 [2024-04-24 02:03:44.670877] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:44.833 [2024-04-24 02:03:44.672444] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:44.833 [2024-04-24 02:03:44.904332] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:44.833 [2024-04-24 02:03:44.904838] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:45.092 [2024-04-24 02:03:45.158268] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:45.351 [2024-04-24 02:03:45.270199] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:45.611 [2024-04-24 02:03:45.536778] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.611 02:03:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.869 [2024-04-24 02:03:45.757241] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:45.870 [2024-04-24 02:03:45.758213] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:45.870 "name": "raid_bdev1", 00:33:45.870 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:45.870 "strip_size_kb": 0, 00:33:45.870 "state": "online", 00:33:45.870 "raid_level": "raid1", 00:33:45.870 "superblock": true, 00:33:45.870 "num_base_bdevs": 4, 00:33:45.870 "num_base_bdevs_discovered": 4, 00:33:45.870 "num_base_bdevs_operational": 4, 00:33:45.870 "process": { 00:33:45.870 "type": "rebuild", 00:33:45.870 "target": "spare", 00:33:45.870 "progress": { 00:33:45.870 "blocks": 16384, 00:33:45.870 "percent": 25 00:33:45.870 } 00:33:45.870 }, 00:33:45.870 "base_bdevs_list": [ 00:33:45.870 { 00:33:45.870 "name": "spare", 00:33:45.870 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:45.870 "is_configured": true, 00:33:45.870 "data_offset": 2048, 00:33:45.870 "data_size": 63488 00:33:45.870 }, 00:33:45.870 { 00:33:45.870 "name": "BaseBdev2", 00:33:45.870 "uuid": "577a7c6e-2dc9-5a92-adb8-0831038c920a", 00:33:45.870 "is_configured": true, 00:33:45.870 "data_offset": 2048, 00:33:45.870 "data_size": 63488 00:33:45.870 }, 00:33:45.870 { 00:33:45.870 "name": "BaseBdev3", 00:33:45.870 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:45.870 "is_configured": true, 00:33:45.870 "data_offset": 2048, 00:33:45.870 "data_size": 63488 00:33:45.870 }, 00:33:45.870 { 00:33:45.870 "name": "BaseBdev4", 00:33:45.870 "uuid": 
"14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:45.870 "is_configured": true, 00:33:45.870 "data_offset": 2048, 00:33:45.870 "data_size": 63488 00:33:45.870 } 00:33:45.870 ] 00:33:45.870 }' 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:33:45.870 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:33:45.870 02:03:45 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:46.129 [2024-04-24 02:03:46.108676] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:46.129 [2024-04-24 02:03:46.110247] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:46.129 [2024-04-24 02:03:46.157270] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:46.387 [2024-04-24 02:03:46.339012] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:46.387 [2024-04-24 02:03:46.455445] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:33:46.387 [2024-04-24 02:03:46.455705] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:33:46.387 [2024-04-24 02:03:46.458500] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.646 02:03:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:46.904 "name": "raid_bdev1", 00:33:46.904 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:46.904 "strip_size_kb": 0, 00:33:46.904 "state": "online", 00:33:46.904 "raid_level": "raid1", 00:33:46.904 "superblock": true, 00:33:46.904 "num_base_bdevs": 4, 00:33:46.904 "num_base_bdevs_discovered": 3, 00:33:46.904 "num_base_bdevs_operational": 3, 00:33:46.904 "process": { 00:33:46.904 "type": "rebuild", 00:33:46.904 "target": "spare", 00:33:46.904 "progress": { 00:33:46.904 
"blocks": 26624, 00:33:46.904 "percent": 41 00:33:46.904 } 00:33:46.904 }, 00:33:46.904 "base_bdevs_list": [ 00:33:46.904 { 00:33:46.904 "name": "spare", 00:33:46.904 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:46.904 "is_configured": true, 00:33:46.904 "data_offset": 2048, 00:33:46.904 "data_size": 63488 00:33:46.904 }, 00:33:46.904 { 00:33:46.904 "name": null, 00:33:46.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.904 "is_configured": false, 00:33:46.904 "data_offset": 2048, 00:33:46.904 "data_size": 63488 00:33:46.904 }, 00:33:46.904 { 00:33:46.904 "name": "BaseBdev3", 00:33:46.904 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:46.904 "is_configured": true, 00:33:46.904 "data_offset": 2048, 00:33:46.904 "data_size": 63488 00:33:46.904 }, 00:33:46.904 { 00:33:46.904 "name": "BaseBdev4", 00:33:46.904 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:46.904 "is_configured": true, 00:33:46.904 "data_offset": 2048, 00:33:46.904 "data_size": 63488 00:33:46.904 } 00:33:46.904 ] 00:33:46.904 }' 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:46.904 [2024-04-24 02:03:46.831467] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@657 -- # local timeout=601 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.904 02:03:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.162 [2024-04-24 02:03:47.164383] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:33:47.162 02:03:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:47.162 "name": "raid_bdev1", 00:33:47.162 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:47.162 "strip_size_kb": 0, 00:33:47.162 "state": "online", 00:33:47.162 "raid_level": "raid1", 00:33:47.162 "superblock": true, 00:33:47.162 "num_base_bdevs": 4, 00:33:47.162 "num_base_bdevs_discovered": 3, 00:33:47.162 "num_base_bdevs_operational": 3, 00:33:47.162 "process": { 00:33:47.162 "type": "rebuild", 00:33:47.162 "target": "spare", 00:33:47.162 "progress": { 00:33:47.162 "blocks": 32768, 00:33:47.162 "percent": 51 00:33:47.162 } 00:33:47.162 }, 00:33:47.162 "base_bdevs_list": [ 00:33:47.162 { 00:33:47.162 "name": "spare", 00:33:47.162 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:47.162 "is_configured": true, 00:33:47.162 "data_offset": 2048, 00:33:47.162 "data_size": 63488 00:33:47.162 }, 00:33:47.162 { 00:33:47.162 "name": null, 00:33:47.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.162 "is_configured": false, 00:33:47.162 "data_offset": 
2048, 00:33:47.162 "data_size": 63488 00:33:47.162 }, 00:33:47.162 { 00:33:47.162 "name": "BaseBdev3", 00:33:47.162 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:47.162 "is_configured": true, 00:33:47.163 "data_offset": 2048, 00:33:47.163 "data_size": 63488 00:33:47.163 }, 00:33:47.163 { 00:33:47.163 "name": "BaseBdev4", 00:33:47.163 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:47.163 "is_configured": true, 00:33:47.163 "data_offset": 2048, 00:33:47.163 "data_size": 63488 00:33:47.163 } 00:33:47.163 ] 00:33:47.163 }' 00:33:47.163 02:03:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:47.422 02:03:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:47.422 02:03:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:47.422 02:03:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:47.422 02:03:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:47.681 [2024-04-24 02:03:47.712632] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:47.681 [2024-04-24 02:03:47.713139] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:48.265 02:03:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:48.265 02:03:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:48.266 02:03:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:48.266 02:03:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:48.266 02:03:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:48.266 02:03:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:48.266 02:03:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.266 02:03:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.523 [2024-04-24 02:03:48.381844] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:33:48.523 02:03:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:48.523 "name": "raid_bdev1", 00:33:48.523 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:48.523 "strip_size_kb": 0, 00:33:48.523 "state": "online", 00:33:48.523 "raid_level": "raid1", 00:33:48.523 "superblock": true, 00:33:48.523 "num_base_bdevs": 4, 00:33:48.523 "num_base_bdevs_discovered": 3, 00:33:48.523 "num_base_bdevs_operational": 3, 00:33:48.523 "process": { 00:33:48.524 "type": "rebuild", 00:33:48.524 "target": "spare", 00:33:48.524 "progress": { 00:33:48.524 "blocks": 55296, 00:33:48.524 "percent": 87 00:33:48.524 } 00:33:48.524 }, 00:33:48.524 "base_bdevs_list": [ 00:33:48.524 { 00:33:48.524 "name": "spare", 00:33:48.524 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:48.524 "is_configured": true, 00:33:48.524 "data_offset": 2048, 00:33:48.524 "data_size": 63488 00:33:48.524 }, 00:33:48.524 { 00:33:48.524 "name": null, 00:33:48.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.524 "is_configured": false, 00:33:48.524 "data_offset": 2048, 00:33:48.524 "data_size": 63488 00:33:48.524 }, 00:33:48.524 { 00:33:48.524 "name": "BaseBdev3", 00:33:48.524 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:48.524 "is_configured": true, 00:33:48.524 "data_offset": 2048, 00:33:48.524 "data_size": 63488 00:33:48.524 }, 00:33:48.524 { 00:33:48.524 "name": "BaseBdev4", 00:33:48.524 
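
One artifact of the run worth flagging: a little earlier the trace shows '[' = false ']' followed by bdev_raid.sh: line 617: [: =: unary operator expected. That is a single-bracket test whose left-hand variable expanded to nothing, so '[' only sees '=' and 'false'. The run tolerates it (the failed test simply falls through to the next branch), but the usual hardening is to quote the expansion or use the [[ ]] builtin, roughly as below; the variable name is hypothetical, since the actual line 617 of bdev_raid.sh is not shown in this log.

  # Hypothetical reproduction of the "[: =: unary operator expected" message above.
  flag=""                      # an empty/unset expansion is the trigger

  [ $flag = false ]            # -> error: '[' sees only '=' and 'false'
  [ "$flag" = false ]          # quoted: a valid (false) comparison, no error
  [[ $flag == false ]]         # [[ ]] does not word-split, also safe
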
"uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:48.524 "is_configured": true, 00:33:48.524 "data_offset": 2048, 00:33:48.524 "data_size": 63488 00:33:48.524 } 00:33:48.524 ] 00:33:48.524 }' 00:33:48.524 02:03:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:48.782 02:03:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:48.782 02:03:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:48.782 02:03:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.782 02:03:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:49.041 [2024-04-24 02:03:48.953288] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:49.041 [2024-04-24 02:03:49.059651] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:49.041 [2024-04-24 02:03:49.064094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.607 02:03:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.173 02:03:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:50.173 "name": "raid_bdev1", 00:33:50.173 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:50.173 "strip_size_kb": 0, 00:33:50.173 "state": "online", 00:33:50.173 "raid_level": "raid1", 00:33:50.173 "superblock": true, 00:33:50.173 "num_base_bdevs": 4, 00:33:50.173 "num_base_bdevs_discovered": 3, 00:33:50.173 "num_base_bdevs_operational": 3, 00:33:50.173 "base_bdevs_list": [ 00:33:50.173 { 00:33:50.173 "name": "spare", 00:33:50.173 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:50.173 "is_configured": true, 00:33:50.173 "data_offset": 2048, 00:33:50.173 "data_size": 63488 00:33:50.173 }, 00:33:50.173 { 00:33:50.173 "name": null, 00:33:50.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.173 "is_configured": false, 00:33:50.173 "data_offset": 2048, 00:33:50.173 "data_size": 63488 00:33:50.173 }, 00:33:50.173 { 00:33:50.173 "name": "BaseBdev3", 00:33:50.173 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:50.173 "is_configured": true, 00:33:50.173 "data_offset": 2048, 00:33:50.173 "data_size": 63488 00:33:50.173 }, 00:33:50.173 { 00:33:50.173 "name": "BaseBdev4", 00:33:50.173 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:50.173 "is_configured": true, 00:33:50.173 "data_offset": 2048, 00:33:50.173 "data_size": 63488 00:33:50.173 } 00:33:50.173 ] 00:33:50.173 }' 00:33:50.173 02:03:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@660 -- # break 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.173 02:03:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:50.432 "name": "raid_bdev1", 00:33:50.432 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:50.432 "strip_size_kb": 0, 00:33:50.432 "state": "online", 00:33:50.432 "raid_level": "raid1", 00:33:50.432 "superblock": true, 00:33:50.432 "num_base_bdevs": 4, 00:33:50.432 "num_base_bdevs_discovered": 3, 00:33:50.432 "num_base_bdevs_operational": 3, 00:33:50.432 "base_bdevs_list": [ 00:33:50.432 { 00:33:50.432 "name": "spare", 00:33:50.432 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:50.432 "is_configured": true, 00:33:50.432 "data_offset": 2048, 00:33:50.432 "data_size": 63488 00:33:50.432 }, 00:33:50.432 { 00:33:50.432 "name": null, 00:33:50.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.432 "is_configured": false, 00:33:50.432 "data_offset": 2048, 00:33:50.432 "data_size": 63488 00:33:50.432 }, 00:33:50.432 { 00:33:50.432 "name": "BaseBdev3", 00:33:50.432 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:50.432 "is_configured": true, 00:33:50.432 "data_offset": 2048, 00:33:50.432 "data_size": 63488 00:33:50.432 }, 00:33:50.432 { 00:33:50.432 "name": "BaseBdev4", 00:33:50.432 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:50.432 "is_configured": true, 00:33:50.432 "data_offset": 2048, 00:33:50.432 "data_size": 63488 00:33:50.432 } 00:33:50.432 ] 00:33:50.432 }' 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.432 02:03:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.999 02:03:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:50.999 "name": "raid_bdev1", 00:33:50.999 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:50.999 "strip_size_kb": 0, 00:33:50.999 "state": "online", 
00:33:50.999 "raid_level": "raid1", 00:33:50.999 "superblock": true, 00:33:50.999 "num_base_bdevs": 4, 00:33:50.999 "num_base_bdevs_discovered": 3, 00:33:50.999 "num_base_bdevs_operational": 3, 00:33:50.999 "base_bdevs_list": [ 00:33:50.999 { 00:33:50.999 "name": "spare", 00:33:50.999 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:50.999 "is_configured": true, 00:33:50.999 "data_offset": 2048, 00:33:50.999 "data_size": 63488 00:33:50.999 }, 00:33:50.999 { 00:33:50.999 "name": null, 00:33:50.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.999 "is_configured": false, 00:33:50.999 "data_offset": 2048, 00:33:50.999 "data_size": 63488 00:33:50.999 }, 00:33:50.999 { 00:33:50.999 "name": "BaseBdev3", 00:33:50.999 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:50.999 "is_configured": true, 00:33:50.999 "data_offset": 2048, 00:33:50.999 "data_size": 63488 00:33:50.999 }, 00:33:50.999 { 00:33:50.999 "name": "BaseBdev4", 00:33:50.999 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:50.999 "is_configured": true, 00:33:50.999 "data_offset": 2048, 00:33:50.999 "data_size": 63488 00:33:50.999 } 00:33:50.999 ] 00:33:50.999 }' 00:33:50.999 02:03:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:50.999 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:33:51.568 02:03:51 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:51.825 [2024-04-24 02:03:51.763467] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:51.825 [2024-04-24 02:03:51.763709] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:51.825 00:33:51.826 Latency(us) 00:33:51.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.826 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:51.826 raid_bdev1 : 12.26 102.00 306.00 0.00 0.00 13906.73 347.18 120835.90 00:33:51.826 =================================================================================================================== 00:33:51.826 Total : 102.00 306.00 0.00 0.00 13906.73 347.18 120835.90 00:33:51.826 [2024-04-24 02:03:51.827055] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:51.826 [2024-04-24 02:03:51.827300] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:51.826 [2024-04-24 02:03:51.827445] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:51.826 [2024-04-24 02:03:51.827595] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:33:51.826 0 00:33:51.826 02:03:51 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.826 02:03:51 -- bdev/bdev_raid.sh@671 -- # jq length 00:33:52.083 02:03:52 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:33:52.083 02:03:52 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:33:52.083 02:03:52 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:52.083 02:03:52 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@12 -- # local i 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.083 02:03:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:33:52.649 /dev/nbd0 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:52.649 02:03:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:52.649 02:03:52 -- common/autotest_common.sh@855 -- # local i 00:33:52.649 02:03:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:52.649 02:03:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:52.649 02:03:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:52.649 02:03:52 -- common/autotest_common.sh@859 -- # break 00:33:52.649 02:03:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:52.649 02:03:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:52.649 02:03:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.649 1+0 records in 00:33:52.649 1+0 records out 00:33:52.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621559 s, 6.6 MB/s 00:33:52.649 02:03:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.649 02:03:52 -- common/autotest_common.sh@872 -- # size=4096 00:33:52.649 02:03:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.649 02:03:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:52.649 02:03:52 -- common/autotest_common.sh@875 -- # return 0 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.649 02:03:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:33:52.649 02:03:52 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:33:52.649 02:03:52 -- bdev/bdev_raid.sh@678 -- # continue 00:33:52.649 02:03:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:33:52.649 02:03:52 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:33:52.649 02:03:52 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@12 -- # local i 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.649 02:03:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:52.908 /dev/nbd1 00:33:52.908 02:03:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:52.908 02:03:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:52.908 02:03:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:52.908 02:03:52 -- common/autotest_common.sh@855 -- # local i 00:33:52.908 02:03:52 -- 
common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:52.908 02:03:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:52.908 02:03:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:52.908 02:03:52 -- common/autotest_common.sh@859 -- # break 00:33:52.908 02:03:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:52.908 02:03:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:52.909 02:03:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.909 1+0 records in 00:33:52.909 1+0 records out 00:33:52.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584799 s, 7.0 MB/s 00:33:52.909 02:03:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.909 02:03:52 -- common/autotest_common.sh@872 -- # size=4096 00:33:52.909 02:03:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.909 02:03:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:52.909 02:03:52 -- common/autotest_common.sh@875 -- # return 0 00:33:52.909 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.909 02:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.909 02:03:52 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:53.167 02:03:53 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:53.167 02:03:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.167 02:03:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:53.167 02:03:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:53.167 02:03:53 -- bdev/nbd_common.sh@51 -- # local i 00:33:53.167 02:03:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:53.167 02:03:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@41 -- # break 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.427 02:03:53 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:33:53.427 02:03:53 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:33:53.427 02:03:53 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@12 -- # local i 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:53.427 02:03:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:53.686 
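
The nbd sequence running here is the post-rebuild data check: the rebuilt spare is exported as /dev/nbd0, each surviving base bdev is exported as /dev/nbd1 in turn, and cmp -i 1048576 byte-compares them while skipping the first 1 MiB, which lines up with the 2048-block data offset at 512-byte blocks, i.e. the superblock region. Condensed into a loop (a sketch using the names from this log; BaseBdev2 is skipped because it was removed earlier and has nothing left to compare against):

  # Compare the rebuilt member against each surviving mirror over NBD.
  $rpc_py nbd_start_disk spare /dev/nbd0

  for bdev in BaseBdev3 BaseBdev4; do
      $rpc_py nbd_start_disk "$bdev" /dev/nbd1
      cmp -i 1048576 /dev/nbd0 /dev/nbd1     # skip the 1 MiB superblock/data_offset region;
                                             # cmp exits non-zero on any mismatch and fails the test
      $rpc_py nbd_stop_disk /dev/nbd1
  done

  $rpc_py nbd_stop_disk /dev/nbd0
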
/dev/nbd1 00:33:53.686 02:03:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:53.686 02:03:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:53.686 02:03:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:53.686 02:03:53 -- common/autotest_common.sh@855 -- # local i 00:33:53.686 02:03:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:53.686 02:03:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:53.686 02:03:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:53.686 02:03:53 -- common/autotest_common.sh@859 -- # break 00:33:53.686 02:03:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:53.686 02:03:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:53.686 02:03:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:53.686 1+0 records in 00:33:53.686 1+0 records out 00:33:53.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690716 s, 5.9 MB/s 00:33:53.686 02:03:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:53.686 02:03:53 -- common/autotest_common.sh@872 -- # size=4096 00:33:53.686 02:03:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:53.686 02:03:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:53.686 02:03:53 -- common/autotest_common.sh@875 -- # return 0 00:33:53.686 02:03:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:53.686 02:03:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:53.686 02:03:53 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:53.944 02:03:53 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:53.944 02:03:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.944 02:03:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:53.944 02:03:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:53.944 02:03:53 -- bdev/nbd_common.sh@51 -- # local i 00:33:53.944 02:03:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:53.944 02:03:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@41 -- # break 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@45 -- # return 0 00:33:54.203 02:03:54 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@51 -- # local i 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:54.203 02:03:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@41 -- # break 00:33:54.461 02:03:54 -- bdev/nbd_common.sh@45 -- # return 0 00:33:54.461 02:03:54 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:33:54.461 02:03:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:54.461 02:03:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:33:54.461 02:03:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:55.027 02:03:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:55.027 [2024-04-24 02:03:55.080542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:55.027 [2024-04-24 02:03:55.080903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:55.027 [2024-04-24 02:03:55.081003] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:55.027 [2024-04-24 02:03:55.081266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:55.027 [2024-04-24 02:03:55.084382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:55.027 [2024-04-24 02:03:55.084629] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:55.027 [2024-04-24 02:03:55.084885] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:55.027 [2024-04-24 02:03:55.085063] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:55.027 BaseBdev1 00:33:55.027 02:03:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:55.027 02:03:55 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:33:55.027 02:03:55 -- bdev/bdev_raid.sh@696 -- # continue 00:33:55.027 02:03:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:55.027 02:03:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:33:55.027 02:03:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:33:55.361 02:03:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:55.620 [2024-04-24 02:03:55.673162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:55.620 [2024-04-24 02:03:55.673448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:55.620 [2024-04-24 02:03:55.673532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:55.620 [2024-04-24 02:03:55.673651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:55.620 [2024-04-24 02:03:55.674228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:55.620 [2024-04-24 02:03:55.674423] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:55.620 [2024-04-24 02:03:55.674694] 
bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:33:55.620 [2024-04-24 02:03:55.674803] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:33:55.620 [2024-04-24 02:03:55.674885] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:55.620 [2024-04-24 02:03:55.674985] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:33:55.620 [2024-04-24 02:03:55.675139] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:55.620 BaseBdev3 00:33:55.620 02:03:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:33:55.620 02:03:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:33:55.620 02:03:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:33:56.186 02:03:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:56.186 [2024-04-24 02:03:56.265360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:56.186 [2024-04-24 02:03:56.265660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:56.186 [2024-04-24 02:03:56.265798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:56.186 [2024-04-24 02:03:56.265899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:56.186 [2024-04-24 02:03:56.266535] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:56.186 [2024-04-24 02:03:56.266720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:56.186 [2024-04-24 02:03:56.266957] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:33:56.186 [2024-04-24 02:03:56.267074] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:56.186 BaseBdev4 00:33:56.443 02:03:56 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:56.701 02:03:56 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:56.957 [2024-04-24 02:03:56.833596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:56.957 [2024-04-24 02:03:56.833936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:56.957 [2024-04-24 02:03:56.834016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:56.957 [2024-04-24 02:03:56.834196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:56.957 [2024-04-24 02:03:56.834848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:56.957 [2024-04-24 02:03:56.835049] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:56.957 [2024-04-24 02:03:56.835280] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:33:56.957 [2024-04-24 02:03:56.835394] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:56.957 spare 00:33:56.957 
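
The passthru churn here is the reassembly phase: each surviving base bdev, plus the delayed spare, is deleted and re-created as a passthru on top of its backing bdev; the raid module re-examines the superblocks it finds and raid_bdev1 reassembles, coming back online with 3 of 4 members, as the verify_raid_bdev_state call that follows confirms. Reduced to the per-bdev step (a sketch with the names from this log; the empty array slot stands in for BaseBdev2, which was removed for good, matching the continue traced above):

  # Re-create each surviving member as a passthru so raid examine can reclaim it.
  base_bdevs=(BaseBdev1 "" BaseBdev3 BaseBdev4)
  for bdev in "${base_bdevs[@]}"; do
      [ -z "$bdev" ] && continue                     # slot 1 (BaseBdev2) stays missing
      $rpc_py bdev_passthru_delete "$bdev"
      $rpc_py bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
  done

  # The spare sits on a delay bdev rather than a malloc bdev.
  $rpc_py bdev_passthru_delete spare
  $rpc_py bdev_passthru_create -b spare_delay -p spare

  # Once the superblocks are re-examined the array should be online, degraded, with 3/4 members.
  verify_raid_bdev_state raid_bdev1 online raid1 0 3
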
02:03:56 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.957 02:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.957 [2024-04-24 02:03:56.935559] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:33:56.957 [2024-04-24 02:03:56.935780] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:56.957 [2024-04-24 02:03:56.936046] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:33:56.957 [2024-04-24 02:03:56.936677] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:33:56.957 [2024-04-24 02:03:56.936796] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:33:56.957 [2024-04-24 02:03:56.937093] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:57.214 02:03:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:57.214 "name": "raid_bdev1", 00:33:57.214 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:57.214 "strip_size_kb": 0, 00:33:57.214 "state": "online", 00:33:57.214 "raid_level": "raid1", 00:33:57.214 "superblock": true, 00:33:57.214 "num_base_bdevs": 4, 00:33:57.214 "num_base_bdevs_discovered": 3, 00:33:57.214 "num_base_bdevs_operational": 3, 00:33:57.214 "base_bdevs_list": [ 00:33:57.214 { 00:33:57.214 "name": "spare", 00:33:57.214 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:57.214 "is_configured": true, 00:33:57.214 "data_offset": 2048, 00:33:57.214 "data_size": 63488 00:33:57.214 }, 00:33:57.214 { 00:33:57.214 "name": null, 00:33:57.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:57.214 "is_configured": false, 00:33:57.214 "data_offset": 2048, 00:33:57.214 "data_size": 63488 00:33:57.214 }, 00:33:57.214 { 00:33:57.214 "name": "BaseBdev3", 00:33:57.214 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:57.214 "is_configured": true, 00:33:57.214 "data_offset": 2048, 00:33:57.214 "data_size": 63488 00:33:57.214 }, 00:33:57.214 { 00:33:57.214 "name": "BaseBdev4", 00:33:57.214 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:57.214 "is_configured": true, 00:33:57.214 "data_offset": 2048, 00:33:57.214 "data_size": 63488 00:33:57.214 } 00:33:57.214 ] 00:33:57.214 }' 00:33:57.214 02:03:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:57.214 02:03:57 -- common/autotest_common.sh@10 -- # set +x 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@184 -- 
# local process_type=none 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.781 02:03:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.040 02:03:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:33:58.040 "name": "raid_bdev1", 00:33:58.040 "uuid": "c5f5af2e-ca75-409b-82f4-3c88e811932c", 00:33:58.040 "strip_size_kb": 0, 00:33:58.040 "state": "online", 00:33:58.040 "raid_level": "raid1", 00:33:58.040 "superblock": true, 00:33:58.040 "num_base_bdevs": 4, 00:33:58.040 "num_base_bdevs_discovered": 3, 00:33:58.040 "num_base_bdevs_operational": 3, 00:33:58.040 "base_bdevs_list": [ 00:33:58.040 { 00:33:58.040 "name": "spare", 00:33:58.040 "uuid": "286f22e4-a3d6-504b-8683-38634f024158", 00:33:58.040 "is_configured": true, 00:33:58.040 "data_offset": 2048, 00:33:58.040 "data_size": 63488 00:33:58.040 }, 00:33:58.040 { 00:33:58.040 "name": null, 00:33:58.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:58.040 "is_configured": false, 00:33:58.040 "data_offset": 2048, 00:33:58.040 "data_size": 63488 00:33:58.040 }, 00:33:58.040 { 00:33:58.040 "name": "BaseBdev3", 00:33:58.041 "uuid": "33e199c2-5ec2-561c-bf3b-a610b849be6a", 00:33:58.041 "is_configured": true, 00:33:58.041 "data_offset": 2048, 00:33:58.041 "data_size": 63488 00:33:58.041 }, 00:33:58.041 { 00:33:58.041 "name": "BaseBdev4", 00:33:58.041 "uuid": "14b8d89f-e9de-584e-8694-7425569fc9d1", 00:33:58.041 "is_configured": true, 00:33:58.041 "data_offset": 2048, 00:33:58.041 "data_size": 63488 00:33:58.041 } 00:33:58.041 ] 00:33:58.041 }' 00:33:58.041 02:03:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:33:58.041 02:03:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:58.041 02:03:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:33:58.041 02:03:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:33:58.041 02:03:58 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:58.041 02:03:58 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.357 02:03:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:33:58.357 02:03:58 -- bdev/bdev_raid.sh@709 -- # killprocess 135259 00:33:58.358 02:03:58 -- common/autotest_common.sh@936 -- # '[' -z 135259 ']' 00:33:58.358 02:03:58 -- common/autotest_common.sh@940 -- # kill -0 135259 00:33:58.358 02:03:58 -- common/autotest_common.sh@941 -- # uname 00:33:58.358 02:03:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:58.358 02:03:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135259 00:33:58.358 02:03:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:58.358 02:03:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:58.358 02:03:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135259' 00:33:58.358 killing process with pid 135259 00:33:58.358 02:03:58 -- common/autotest_common.sh@955 -- # kill 135259 00:33:58.358 Received shutdown signal, test time was about 18.766363 seconds 00:33:58.358 00:33:58.358 Latency(us) 00:33:58.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:58.358 
=================================================================================================================== 00:33:58.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:58.358 02:03:58 -- common/autotest_common.sh@960 -- # wait 135259 00:33:58.358 [2024-04-24 02:03:58.301072] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:58.358 [2024-04-24 02:03:58.301177] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:58.358 [2024-04-24 02:03:58.301272] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:58.358 [2024-04-24 02:03:58.301284] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:33:58.924 [2024-04-24 02:03:58.757444] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:34:00.301 00:34:00.301 real 0m26.210s 00:34:00.301 user 0m41.507s 00:34:00.301 sys 0m3.654s 00:34:00.301 02:04:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:00.301 ************************************ 00:34:00.301 END TEST raid_rebuild_test_sb_io 00:34:00.301 ************************************ 00:34:00.301 02:04:00 -- common/autotest_common.sh@10 -- # set +x 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:34:00.301 02:04:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:00.301 02:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:00.301 02:04:00 -- common/autotest_common.sh@10 -- # set +x 00:34:00.301 ************************************ 00:34:00.301 START TEST raid5f_state_function_test 00:34:00.301 ************************************ 00:34:00.301 02:04:00 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 false 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:00.301 02:04:00 -- 
bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=135911 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135911' 00:34:00.301 Process raid pid: 135911 00:34:00.301 02:04:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135911 /var/tmp/spdk-raid.sock 00:34:00.301 02:04:00 -- common/autotest_common.sh@817 -- # '[' -z 135911 ']' 00:34:00.301 02:04:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:00.301 02:04:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:00.301 02:04:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:00.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:00.301 02:04:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:00.301 02:04:00 -- common/autotest_common.sh@10 -- # set +x 00:34:00.560 [2024-04-24 02:04:00.402120] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:34:00.560 [2024-04-24 02:04:00.402470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.560 [2024-04-24 02:04:00.561665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.819 [2024-04-24 02:04:00.781295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.076 [2024-04-24 02:04:01.029889] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:01.641 02:04:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:01.641 02:04:01 -- common/autotest_common.sh@850 -- # return 0 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:01.641 [2024-04-24 02:04:01.691621] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:01.641 [2024-04-24 02:04:01.691895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:01.641 [2024-04-24 02:04:01.692027] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:01.641 [2024-04-24 02:04:01.692090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:01.641 [2024-04-24 02:04:01.692273] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:01.641 [2024-04-24 02:04:01.692359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.641 02:04:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:02.207 02:04:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:02.207 "name": "Existed_Raid", 00:34:02.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.207 "strip_size_kb": 64, 00:34:02.207 "state": "configuring", 00:34:02.207 "raid_level": "raid5f", 00:34:02.207 "superblock": false, 00:34:02.207 "num_base_bdevs": 3, 00:34:02.207 "num_base_bdevs_discovered": 0, 00:34:02.207 "num_base_bdevs_operational": 3, 00:34:02.207 "base_bdevs_list": [ 00:34:02.207 { 00:34:02.207 "name": "BaseBdev1", 00:34:02.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.207 "is_configured": false, 00:34:02.207 "data_offset": 0, 00:34:02.207 "data_size": 0 00:34:02.207 }, 00:34:02.207 { 00:34:02.207 "name": "BaseBdev2", 00:34:02.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.207 "is_configured": false, 00:34:02.207 "data_offset": 0, 00:34:02.207 "data_size": 0 00:34:02.207 }, 00:34:02.207 { 00:34:02.207 "name": "BaseBdev3", 00:34:02.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.207 "is_configured": false, 00:34:02.207 "data_offset": 0, 00:34:02.207 "data_size": 0 00:34:02.207 } 00:34:02.207 ] 00:34:02.207 }' 00:34:02.207 02:04:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:02.207 02:04:02 -- common/autotest_common.sh@10 -- # set +x 00:34:02.789 02:04:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:03.047 [2024-04-24 02:04:03.007742] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:03.047 [2024-04-24 02:04:03.007984] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:34:03.047 02:04:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:03.305 [2024-04-24 02:04:03.283814] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:03.305 [2024-04-24 02:04:03.284058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:03.305 [2024-04-24 02:04:03.284259] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:03.305 [2024-04-24 02:04:03.284377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:03.305 [2024-04-24 02:04:03.284474] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:03.305 [2024-04-24 02:04:03.284580] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:03.305 02:04:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:03.564 [2024-04-24 02:04:03.566803] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:03.564 BaseBdev1 00:34:03.564 02:04:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:03.564 02:04:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:34:03.564 02:04:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:03.564 02:04:03 -- common/autotest_common.sh@887 -- # local i 00:34:03.564 02:04:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:03.564 02:04:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:03.564 02:04:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:04.174 02:04:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:04.174 [ 00:34:04.174 { 00:34:04.174 "name": "BaseBdev1", 00:34:04.174 "aliases": [ 00:34:04.174 "219429d3-2ee4-4c50-b969-6092aacfef78" 00:34:04.174 ], 00:34:04.174 "product_name": "Malloc disk", 00:34:04.174 "block_size": 512, 00:34:04.174 "num_blocks": 65536, 00:34:04.174 "uuid": "219429d3-2ee4-4c50-b969-6092aacfef78", 00:34:04.174 "assigned_rate_limits": { 00:34:04.174 "rw_ios_per_sec": 0, 00:34:04.174 "rw_mbytes_per_sec": 0, 00:34:04.174 "r_mbytes_per_sec": 0, 00:34:04.174 "w_mbytes_per_sec": 0 00:34:04.174 }, 00:34:04.174 "claimed": true, 00:34:04.174 "claim_type": "exclusive_write", 00:34:04.174 "zoned": false, 00:34:04.174 "supported_io_types": { 00:34:04.174 "read": true, 00:34:04.174 "write": true, 00:34:04.174 "unmap": true, 00:34:04.174 "write_zeroes": true, 00:34:04.174 "flush": true, 00:34:04.174 "reset": true, 00:34:04.174 "compare": false, 00:34:04.174 "compare_and_write": false, 00:34:04.174 "abort": true, 00:34:04.174 "nvme_admin": false, 00:34:04.174 "nvme_io": false 00:34:04.174 }, 00:34:04.174 "memory_domains": [ 00:34:04.174 { 00:34:04.174 "dma_device_id": "system", 00:34:04.174 "dma_device_type": 1 00:34:04.174 }, 00:34:04.174 { 00:34:04.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:04.174 "dma_device_type": 2 00:34:04.174 } 00:34:04.174 ], 00:34:04.174 "driver_specific": {} 00:34:04.174 } 00:34:04.174 ] 00:34:04.174 02:04:04 -- common/autotest_common.sh@893 -- # return 0 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:04.174 02:04:04 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.434 02:04:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:04.434 "name": "Existed_Raid", 00:34:04.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.434 "strip_size_kb": 64, 00:34:04.434 "state": "configuring", 00:34:04.434 "raid_level": "raid5f", 00:34:04.434 "superblock": false, 00:34:04.434 "num_base_bdevs": 3, 00:34:04.434 "num_base_bdevs_discovered": 1, 00:34:04.434 "num_base_bdevs_operational": 3, 00:34:04.434 "base_bdevs_list": [ 00:34:04.434 { 00:34:04.434 "name": "BaseBdev1", 00:34:04.434 "uuid": "219429d3-2ee4-4c50-b969-6092aacfef78", 00:34:04.434 "is_configured": true, 00:34:04.434 "data_offset": 0, 00:34:04.434 "data_size": 65536 00:34:04.434 }, 00:34:04.434 { 00:34:04.434 "name": "BaseBdev2", 00:34:04.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.434 "is_configured": false, 00:34:04.434 "data_offset": 0, 00:34:04.434 "data_size": 0 00:34:04.434 }, 00:34:04.434 { 00:34:04.434 "name": "BaseBdev3", 00:34:04.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.434 "is_configured": false, 00:34:04.434 "data_offset": 0, 00:34:04.434 "data_size": 0 00:34:04.434 } 00:34:04.434 ] 00:34:04.434 }' 00:34:04.434 02:04:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:04.434 02:04:04 -- common/autotest_common.sh@10 -- # set +x 00:34:05.367 02:04:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:05.367 [2024-04-24 02:04:05.395277] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:05.367 [2024-04-24 02:04:05.395496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:34:05.367 02:04:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:34:05.367 02:04:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:05.626 [2024-04-24 02:04:05.607376] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:05.626 [2024-04-24 02:04:05.609749] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:05.626 [2024-04-24 02:04:05.609961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:05.626 [2024-04-24 02:04:05.610065] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:05.626 [2024-04-24 02:04:05.610130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.626 02:04:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:05.884 02:04:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:05.884 "name": "Existed_Raid", 00:34:05.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.884 "strip_size_kb": 64, 00:34:05.884 "state": "configuring", 00:34:05.884 "raid_level": "raid5f", 00:34:05.884 "superblock": false, 00:34:05.885 "num_base_bdevs": 3, 00:34:05.885 "num_base_bdevs_discovered": 1, 00:34:05.885 "num_base_bdevs_operational": 3, 00:34:05.885 "base_bdevs_list": [ 00:34:05.885 { 00:34:05.885 "name": "BaseBdev1", 00:34:05.885 "uuid": "219429d3-2ee4-4c50-b969-6092aacfef78", 00:34:05.885 "is_configured": true, 00:34:05.885 "data_offset": 0, 00:34:05.885 "data_size": 65536 00:34:05.885 }, 00:34:05.885 { 00:34:05.885 "name": "BaseBdev2", 00:34:05.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.885 "is_configured": false, 00:34:05.885 "data_offset": 0, 00:34:05.885 "data_size": 0 00:34:05.885 }, 00:34:05.885 { 00:34:05.885 "name": "BaseBdev3", 00:34:05.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.885 "is_configured": false, 00:34:05.885 "data_offset": 0, 00:34:05.885 "data_size": 0 00:34:05.885 } 00:34:05.885 ] 00:34:05.885 }' 00:34:05.885 02:04:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:05.885 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:34:06.818 02:04:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:07.076 [2024-04-24 02:04:06.997260] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:07.076 BaseBdev2 00:34:07.076 02:04:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:34:07.076 02:04:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:34:07.076 02:04:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:07.076 02:04:07 -- common/autotest_common.sh@887 -- # local i 00:34:07.076 02:04:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:07.076 02:04:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:07.076 02:04:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:07.333 02:04:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:07.592 [ 00:34:07.592 { 00:34:07.592 "name": "BaseBdev2", 00:34:07.592 "aliases": [ 00:34:07.592 "77a679f5-522a-4c29-877a-20b4063a4360" 00:34:07.592 ], 00:34:07.592 "product_name": "Malloc disk", 00:34:07.592 "block_size": 512, 00:34:07.592 "num_blocks": 65536, 00:34:07.592 "uuid": "77a679f5-522a-4c29-877a-20b4063a4360", 00:34:07.592 "assigned_rate_limits": { 00:34:07.592 "rw_ios_per_sec": 0, 00:34:07.592 "rw_mbytes_per_sec": 0, 00:34:07.592 "r_mbytes_per_sec": 0, 00:34:07.592 "w_mbytes_per_sec": 0 00:34:07.592 }, 00:34:07.592 "claimed": true, 00:34:07.592 "claim_type": "exclusive_write", 00:34:07.592 "zoned": false, 00:34:07.592 "supported_io_types": { 00:34:07.592 "read": true, 00:34:07.592 "write": true, 00:34:07.592 "unmap": true, 
00:34:07.592 "write_zeroes": true, 00:34:07.592 "flush": true, 00:34:07.592 "reset": true, 00:34:07.592 "compare": false, 00:34:07.592 "compare_and_write": false, 00:34:07.592 "abort": true, 00:34:07.592 "nvme_admin": false, 00:34:07.592 "nvme_io": false 00:34:07.592 }, 00:34:07.592 "memory_domains": [ 00:34:07.592 { 00:34:07.592 "dma_device_id": "system", 00:34:07.592 "dma_device_type": 1 00:34:07.592 }, 00:34:07.592 { 00:34:07.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.592 "dma_device_type": 2 00:34:07.592 } 00:34:07.592 ], 00:34:07.592 "driver_specific": {} 00:34:07.592 } 00:34:07.592 ] 00:34:07.592 02:04:07 -- common/autotest_common.sh@893 -- # return 0 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:07.592 02:04:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.850 02:04:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:07.850 "name": "Existed_Raid", 00:34:07.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.850 "strip_size_kb": 64, 00:34:07.850 "state": "configuring", 00:34:07.850 "raid_level": "raid5f", 00:34:07.850 "superblock": false, 00:34:07.850 "num_base_bdevs": 3, 00:34:07.850 "num_base_bdevs_discovered": 2, 00:34:07.850 "num_base_bdevs_operational": 3, 00:34:07.850 "base_bdevs_list": [ 00:34:07.850 { 00:34:07.850 "name": "BaseBdev1", 00:34:07.850 "uuid": "219429d3-2ee4-4c50-b969-6092aacfef78", 00:34:07.850 "is_configured": true, 00:34:07.850 "data_offset": 0, 00:34:07.850 "data_size": 65536 00:34:07.850 }, 00:34:07.850 { 00:34:07.850 "name": "BaseBdev2", 00:34:07.850 "uuid": "77a679f5-522a-4c29-877a-20b4063a4360", 00:34:07.850 "is_configured": true, 00:34:07.850 "data_offset": 0, 00:34:07.850 "data_size": 65536 00:34:07.850 }, 00:34:07.850 { 00:34:07.850 "name": "BaseBdev3", 00:34:07.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.850 "is_configured": false, 00:34:07.850 "data_offset": 0, 00:34:07.850 "data_size": 0 00:34:07.850 } 00:34:07.850 ] 00:34:07.850 }' 00:34:07.850 02:04:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:07.850 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:34:08.784 02:04:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:09.042 [2024-04-24 02:04:08.917353] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:09.042 [2024-04-24 02:04:08.917640] bdev_raid.c:1701:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000011500 00:34:09.042 [2024-04-24 02:04:08.917694] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:09.042 [2024-04-24 02:04:08.917949] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:34:09.042 [2024-04-24 02:04:08.925555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:34:09.042 [2024-04-24 02:04:08.925715] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:34:09.042 [2024-04-24 02:04:08.926148] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:09.043 BaseBdev3 00:34:09.043 02:04:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:34:09.043 02:04:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:34:09.043 02:04:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:09.043 02:04:08 -- common/autotest_common.sh@887 -- # local i 00:34:09.043 02:04:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:09.043 02:04:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:09.043 02:04:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:09.301 02:04:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:09.621 [ 00:34:09.621 { 00:34:09.621 "name": "BaseBdev3", 00:34:09.621 "aliases": [ 00:34:09.621 "051e4aec-0b10-420a-87af-28ed36eb1e4c" 00:34:09.621 ], 00:34:09.621 "product_name": "Malloc disk", 00:34:09.621 "block_size": 512, 00:34:09.621 "num_blocks": 65536, 00:34:09.621 "uuid": "051e4aec-0b10-420a-87af-28ed36eb1e4c", 00:34:09.621 "assigned_rate_limits": { 00:34:09.621 "rw_ios_per_sec": 0, 00:34:09.621 "rw_mbytes_per_sec": 0, 00:34:09.621 "r_mbytes_per_sec": 0, 00:34:09.621 "w_mbytes_per_sec": 0 00:34:09.621 }, 00:34:09.621 "claimed": true, 00:34:09.621 "claim_type": "exclusive_write", 00:34:09.621 "zoned": false, 00:34:09.621 "supported_io_types": { 00:34:09.621 "read": true, 00:34:09.621 "write": true, 00:34:09.621 "unmap": true, 00:34:09.621 "write_zeroes": true, 00:34:09.621 "flush": true, 00:34:09.621 "reset": true, 00:34:09.621 "compare": false, 00:34:09.621 "compare_and_write": false, 00:34:09.621 "abort": true, 00:34:09.621 "nvme_admin": false, 00:34:09.621 "nvme_io": false 00:34:09.621 }, 00:34:09.621 "memory_domains": [ 00:34:09.621 { 00:34:09.621 "dma_device_id": "system", 00:34:09.621 "dma_device_type": 1 00:34:09.621 }, 00:34:09.621 { 00:34:09.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:09.622 "dma_device_type": 2 00:34:09.622 } 00:34:09.622 ], 00:34:09.622 "driver_specific": {} 00:34:09.622 } 00:34:09.622 ] 00:34:09.622 02:04:09 -- common/autotest_common.sh@893 -- # return 0 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.622 02:04:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:09.893 02:04:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:09.893 "name": "Existed_Raid", 00:34:09.893 "uuid": "f40e08ea-f74e-4ed7-a980-66efd5a07fa8", 00:34:09.893 "strip_size_kb": 64, 00:34:09.893 "state": "online", 00:34:09.893 "raid_level": "raid5f", 00:34:09.893 "superblock": false, 00:34:09.893 "num_base_bdevs": 3, 00:34:09.893 "num_base_bdevs_discovered": 3, 00:34:09.893 "num_base_bdevs_operational": 3, 00:34:09.893 "base_bdevs_list": [ 00:34:09.893 { 00:34:09.893 "name": "BaseBdev1", 00:34:09.893 "uuid": "219429d3-2ee4-4c50-b969-6092aacfef78", 00:34:09.893 "is_configured": true, 00:34:09.893 "data_offset": 0, 00:34:09.893 "data_size": 65536 00:34:09.893 }, 00:34:09.893 { 00:34:09.893 "name": "BaseBdev2", 00:34:09.893 "uuid": "77a679f5-522a-4c29-877a-20b4063a4360", 00:34:09.893 "is_configured": true, 00:34:09.893 "data_offset": 0, 00:34:09.893 "data_size": 65536 00:34:09.893 }, 00:34:09.893 { 00:34:09.893 "name": "BaseBdev3", 00:34:09.893 "uuid": "051e4aec-0b10-420a-87af-28ed36eb1e4c", 00:34:09.893 "is_configured": true, 00:34:09.893 "data_offset": 0, 00:34:09.893 "data_size": 65536 00:34:09.893 } 00:34:09.893 ] 00:34:09.893 }' 00:34:09.893 02:04:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:09.893 02:04:09 -- common/autotest_common.sh@10 -- # set +x 00:34:10.461 02:04:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:10.719 [2024-04-24 02:04:10.613586] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.719 02:04:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:10.979 02:04:10 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:10.979 "name": "Existed_Raid", 00:34:10.979 "uuid": "f40e08ea-f74e-4ed7-a980-66efd5a07fa8", 00:34:10.979 "strip_size_kb": 64, 00:34:10.979 "state": "online", 00:34:10.979 "raid_level": "raid5f", 00:34:10.979 "superblock": false, 00:34:10.979 "num_base_bdevs": 3, 00:34:10.979 "num_base_bdevs_discovered": 2, 00:34:10.979 "num_base_bdevs_operational": 2, 00:34:10.979 "base_bdevs_list": [ 00:34:10.979 { 00:34:10.979 "name": null, 00:34:10.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.979 "is_configured": false, 00:34:10.979 "data_offset": 0, 00:34:10.979 "data_size": 65536 00:34:10.979 }, 00:34:10.979 { 00:34:10.979 "name": "BaseBdev2", 00:34:10.979 "uuid": "77a679f5-522a-4c29-877a-20b4063a4360", 00:34:10.979 "is_configured": true, 00:34:10.979 "data_offset": 0, 00:34:10.979 "data_size": 65536 00:34:10.979 }, 00:34:10.979 { 00:34:10.979 "name": "BaseBdev3", 00:34:10.979 "uuid": "051e4aec-0b10-420a-87af-28ed36eb1e4c", 00:34:10.979 "is_configured": true, 00:34:10.979 "data_offset": 0, 00:34:10.979 "data_size": 65536 00:34:10.979 } 00:34:10.979 ] 00:34:10.979 }' 00:34:10.979 02:04:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:10.979 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:34:11.546 02:04:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:11.546 02:04:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:11.546 02:04:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.546 02:04:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:11.804 02:04:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:11.804 02:04:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:11.804 02:04:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:12.369 [2024-04-24 02:04:12.164806] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:12.369 [2024-04-24 02:04:12.165130] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:12.369 [2024-04-24 02:04:12.268971] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:12.369 02:04:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:12.369 02:04:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:12.369 02:04:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:12.369 02:04:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.627 02:04:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:12.627 02:04:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:12.627 02:04:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:12.887 [2024-04-24 02:04:12.817329] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:12.887 [2024-04-24 02:04:12.817647] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:34:13.144 02:04:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:13.144 02:04:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:13.144 02:04:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:34:13.144 02:04:12 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.402 02:04:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:34:13.402 02:04:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:13.402 02:04:13 -- bdev/bdev_raid.sh@287 -- # killprocess 135911 00:34:13.402 02:04:13 -- common/autotest_common.sh@936 -- # '[' -z 135911 ']' 00:34:13.402 02:04:13 -- common/autotest_common.sh@940 -- # kill -0 135911 00:34:13.402 02:04:13 -- common/autotest_common.sh@941 -- # uname 00:34:13.402 02:04:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:13.402 02:04:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135911 00:34:13.402 02:04:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:13.402 02:04:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:13.402 02:04:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135911' 00:34:13.402 killing process with pid 135911 00:34:13.402 02:04:13 -- common/autotest_common.sh@955 -- # kill 135911 00:34:13.402 [2024-04-24 02:04:13.297231] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:13.402 02:04:13 -- common/autotest_common.sh@960 -- # wait 135911 00:34:13.402 [2024-04-24 02:04:13.297509] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:14.777 ************************************ 00:34:14.777 END TEST raid5f_state_function_test 00:34:14.777 ************************************ 00:34:14.777 02:04:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:14.777 00:34:14.777 real 0m14.477s 00:34:14.777 user 0m24.996s 00:34:14.777 sys 0m1.862s 00:34:14.777 02:04:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:14.777 02:04:14 -- common/autotest_common.sh@10 -- # set +x 00:34:14.777 02:04:14 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:34:14.777 02:04:14 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:14.777 02:04:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:14.777 02:04:14 -- common/autotest_common.sh@10 -- # set +x 00:34:15.035 ************************************ 00:34:15.035 START TEST raid5f_state_function_test_sb 00:34:15.035 ************************************ 00:34:15.035 02:04:14 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 true 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 
'BaseBdev3') 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=136311 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 136311' 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:15.035 Process raid pid: 136311 00:34:15.035 02:04:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 136311 /var/tmp/spdk-raid.sock 00:34:15.035 02:04:14 -- common/autotest_common.sh@817 -- # '[' -z 136311 ']' 00:34:15.035 02:04:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:15.035 02:04:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:15.035 02:04:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:15.035 02:04:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:15.035 02:04:14 -- common/autotest_common.sh@10 -- # set +x 00:34:15.035 [2024-04-24 02:04:15.006548] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
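# --- Annotation (not part of the captured console output): a minimal sketch of the RPC sequence that
# --- the raid5f_state_function_test_sb run starting here drives against the bdev_svc app. Every command
# --- is copied from invocations traced below in this log (same rpc.py path, socket and arguments); the
# --- ordering follows the trace, and the sketch is a hand-assembled illustration, not an excerpt of
# --- bdev_raid.sh itself.
#
# Create the array first; with no base bdevs registered yet it stays in the "configuring" state
# (-z 64 = 64 KiB strip size, -s = write an on-disk superblock, i.e. the _sb variant of the test):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
#
# Register the three 32 MiB / 512-byte-block malloc base bdevs; as each one appears it is claimed by
# the configuring array, and after the third the array transitions to "online":
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
#
# verify_raid_bdev_state reads the state back and asserts on the fields seen in the JSON dumps below:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
#
# Teardown between configurations:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid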
00:34:15.035 [2024-04-24 02:04:15.006936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.293 [2024-04-24 02:04:15.186761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.583 [2024-04-24 02:04:15.472668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.861 [2024-04-24 02:04:15.720046] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:16.119 02:04:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:16.119 02:04:16 -- common/autotest_common.sh@850 -- # return 0 00:34:16.119 02:04:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:16.377 [2024-04-24 02:04:16.251847] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:16.377 [2024-04-24 02:04:16.252100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:16.377 [2024-04-24 02:04:16.252251] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:16.377 [2024-04-24 02:04:16.252314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:16.377 [2024-04-24 02:04:16.252403] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:16.377 [2024-04-24 02:04:16.252478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:16.377 02:04:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:16.635 02:04:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:16.635 "name": "Existed_Raid", 00:34:16.635 "uuid": "8eb9f3fe-7dfb-444e-b56d-86f45ebd5a40", 00:34:16.635 "strip_size_kb": 64, 00:34:16.635 "state": "configuring", 00:34:16.635 "raid_level": "raid5f", 00:34:16.635 "superblock": true, 00:34:16.635 "num_base_bdevs": 3, 00:34:16.635 "num_base_bdevs_discovered": 0, 00:34:16.635 "num_base_bdevs_operational": 3, 00:34:16.635 "base_bdevs_list": [ 00:34:16.635 { 00:34:16.635 "name": "BaseBdev1", 00:34:16.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.635 "is_configured": false, 00:34:16.635 "data_offset": 0, 00:34:16.635 "data_size": 0 00:34:16.635 }, 00:34:16.635 { 00:34:16.635 "name": "BaseBdev2", 00:34:16.635 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:16.635 "is_configured": false, 00:34:16.635 "data_offset": 0, 00:34:16.635 "data_size": 0 00:34:16.635 }, 00:34:16.635 { 00:34:16.635 "name": "BaseBdev3", 00:34:16.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.635 "is_configured": false, 00:34:16.635 "data_offset": 0, 00:34:16.635 "data_size": 0 00:34:16.635 } 00:34:16.635 ] 00:34:16.635 }' 00:34:16.635 02:04:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:16.635 02:04:16 -- common/autotest_common.sh@10 -- # set +x 00:34:17.202 02:04:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:17.460 [2024-04-24 02:04:17.407989] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:17.460 [2024-04-24 02:04:17.408257] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:34:17.460 02:04:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:17.719 [2024-04-24 02:04:17.612061] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:17.719 [2024-04-24 02:04:17.612335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:17.719 [2024-04-24 02:04:17.612440] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:17.719 [2024-04-24 02:04:17.612497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:17.719 [2024-04-24 02:04:17.612528] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:17.719 [2024-04-24 02:04:17.612631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:17.719 02:04:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:17.977 [2024-04-24 02:04:17.882363] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:17.977 BaseBdev1 00:34:17.978 02:04:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:17.978 02:04:17 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:34:17.978 02:04:17 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:17.978 02:04:17 -- common/autotest_common.sh@887 -- # local i 00:34:17.978 02:04:17 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:17.978 02:04:17 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:17.978 02:04:17 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:18.265 02:04:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:18.523 [ 00:34:18.523 { 00:34:18.523 "name": "BaseBdev1", 00:34:18.523 "aliases": [ 00:34:18.523 "3be1b4c9-9d83-4d98-813c-a6e4b089e9ce" 00:34:18.523 ], 00:34:18.523 "product_name": "Malloc disk", 00:34:18.523 "block_size": 512, 00:34:18.523 "num_blocks": 65536, 00:34:18.523 "uuid": "3be1b4c9-9d83-4d98-813c-a6e4b089e9ce", 00:34:18.523 "assigned_rate_limits": { 00:34:18.523 "rw_ios_per_sec": 0, 00:34:18.523 "rw_mbytes_per_sec": 0, 00:34:18.523 "r_mbytes_per_sec": 0, 00:34:18.523 
"w_mbytes_per_sec": 0 00:34:18.523 }, 00:34:18.523 "claimed": true, 00:34:18.523 "claim_type": "exclusive_write", 00:34:18.523 "zoned": false, 00:34:18.523 "supported_io_types": { 00:34:18.523 "read": true, 00:34:18.523 "write": true, 00:34:18.523 "unmap": true, 00:34:18.523 "write_zeroes": true, 00:34:18.523 "flush": true, 00:34:18.523 "reset": true, 00:34:18.523 "compare": false, 00:34:18.523 "compare_and_write": false, 00:34:18.523 "abort": true, 00:34:18.523 "nvme_admin": false, 00:34:18.523 "nvme_io": false 00:34:18.523 }, 00:34:18.523 "memory_domains": [ 00:34:18.523 { 00:34:18.523 "dma_device_id": "system", 00:34:18.523 "dma_device_type": 1 00:34:18.523 }, 00:34:18.523 { 00:34:18.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.523 "dma_device_type": 2 00:34:18.523 } 00:34:18.523 ], 00:34:18.523 "driver_specific": {} 00:34:18.523 } 00:34:18.523 ] 00:34:18.523 02:04:18 -- common/autotest_common.sh@893 -- # return 0 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.523 02:04:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:18.781 02:04:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:18.781 "name": "Existed_Raid", 00:34:18.781 "uuid": "a86807ab-0c52-45e7-a25c-09c698ce11de", 00:34:18.781 "strip_size_kb": 64, 00:34:18.781 "state": "configuring", 00:34:18.781 "raid_level": "raid5f", 00:34:18.781 "superblock": true, 00:34:18.781 "num_base_bdevs": 3, 00:34:18.781 "num_base_bdevs_discovered": 1, 00:34:18.781 "num_base_bdevs_operational": 3, 00:34:18.781 "base_bdevs_list": [ 00:34:18.781 { 00:34:18.781 "name": "BaseBdev1", 00:34:18.781 "uuid": "3be1b4c9-9d83-4d98-813c-a6e4b089e9ce", 00:34:18.781 "is_configured": true, 00:34:18.781 "data_offset": 2048, 00:34:18.781 "data_size": 63488 00:34:18.781 }, 00:34:18.781 { 00:34:18.781 "name": "BaseBdev2", 00:34:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.781 "is_configured": false, 00:34:18.781 "data_offset": 0, 00:34:18.781 "data_size": 0 00:34:18.781 }, 00:34:18.781 { 00:34:18.781 "name": "BaseBdev3", 00:34:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.781 "is_configured": false, 00:34:18.781 "data_offset": 0, 00:34:18.781 "data_size": 0 00:34:18.781 } 00:34:18.781 ] 00:34:18.781 }' 00:34:18.781 02:04:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:18.781 02:04:18 -- common/autotest_common.sh@10 -- # set +x 00:34:19.715 02:04:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:19.715 [2024-04-24 02:04:19.758845] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:34:19.715 [2024-04-24 02:04:19.759100] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:34:19.715 02:04:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:34:19.715 02:04:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:20.279 02:04:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:20.538 BaseBdev1 00:34:20.538 02:04:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:34:20.538 02:04:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:34:20.538 02:04:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:20.538 02:04:20 -- common/autotest_common.sh@887 -- # local i 00:34:20.538 02:04:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:20.538 02:04:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:20.538 02:04:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:20.797 02:04:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:21.055 [ 00:34:21.055 { 00:34:21.055 "name": "BaseBdev1", 00:34:21.055 "aliases": [ 00:34:21.055 "af37d783-2ab4-497d-a329-1fcf0617dbcc" 00:34:21.055 ], 00:34:21.055 "product_name": "Malloc disk", 00:34:21.055 "block_size": 512, 00:34:21.055 "num_blocks": 65536, 00:34:21.055 "uuid": "af37d783-2ab4-497d-a329-1fcf0617dbcc", 00:34:21.055 "assigned_rate_limits": { 00:34:21.055 "rw_ios_per_sec": 0, 00:34:21.055 "rw_mbytes_per_sec": 0, 00:34:21.055 "r_mbytes_per_sec": 0, 00:34:21.055 "w_mbytes_per_sec": 0 00:34:21.055 }, 00:34:21.055 "claimed": false, 00:34:21.055 "zoned": false, 00:34:21.055 "supported_io_types": { 00:34:21.055 "read": true, 00:34:21.055 "write": true, 00:34:21.055 "unmap": true, 00:34:21.055 "write_zeroes": true, 00:34:21.055 "flush": true, 00:34:21.055 "reset": true, 00:34:21.055 "compare": false, 00:34:21.055 "compare_and_write": false, 00:34:21.055 "abort": true, 00:34:21.055 "nvme_admin": false, 00:34:21.055 "nvme_io": false 00:34:21.055 }, 00:34:21.055 "memory_domains": [ 00:34:21.055 { 00:34:21.055 "dma_device_id": "system", 00:34:21.055 "dma_device_type": 1 00:34:21.055 }, 00:34:21.055 { 00:34:21.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:21.055 "dma_device_type": 2 00:34:21.055 } 00:34:21.055 ], 00:34:21.055 "driver_specific": {} 00:34:21.055 } 00:34:21.055 ] 00:34:21.055 02:04:20 -- common/autotest_common.sh@893 -- # return 0 00:34:21.055 02:04:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:21.055 [2024-04-24 02:04:21.112383] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:21.055 [2024-04-24 02:04:21.114680] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:21.055 [2024-04-24 02:04:21.114882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:21.055 [2024-04-24 02:04:21.114994] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:21.055 [2024-04-24 02:04:21.115055] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.055 02:04:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.634 02:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:21.634 "name": "Existed_Raid", 00:34:21.634 "uuid": "6ef9246d-a615-4e07-aeb9-58ee5dda2613", 00:34:21.634 "strip_size_kb": 64, 00:34:21.634 "state": "configuring", 00:34:21.634 "raid_level": "raid5f", 00:34:21.634 "superblock": true, 00:34:21.634 "num_base_bdevs": 3, 00:34:21.634 "num_base_bdevs_discovered": 1, 00:34:21.634 "num_base_bdevs_operational": 3, 00:34:21.634 "base_bdevs_list": [ 00:34:21.634 { 00:34:21.634 "name": "BaseBdev1", 00:34:21.634 "uuid": "af37d783-2ab4-497d-a329-1fcf0617dbcc", 00:34:21.634 "is_configured": true, 00:34:21.634 "data_offset": 2048, 00:34:21.634 "data_size": 63488 00:34:21.634 }, 00:34:21.634 { 00:34:21.634 "name": "BaseBdev2", 00:34:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.634 "is_configured": false, 00:34:21.634 "data_offset": 0, 00:34:21.634 "data_size": 0 00:34:21.634 }, 00:34:21.634 { 00:34:21.634 "name": "BaseBdev3", 00:34:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.634 "is_configured": false, 00:34:21.634 "data_offset": 0, 00:34:21.634 "data_size": 0 00:34:21.634 } 00:34:21.634 ] 00:34:21.634 }' 00:34:21.634 02:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:21.634 02:04:21 -- common/autotest_common.sh@10 -- # set +x 00:34:22.213 02:04:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:22.475 [2024-04-24 02:04:22.377737] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:22.475 BaseBdev2 00:34:22.475 02:04:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:34:22.475 02:04:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:34:22.475 02:04:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:22.475 02:04:22 -- common/autotest_common.sh@887 -- # local i 00:34:22.475 02:04:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:22.475 02:04:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:22.475 02:04:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:22.732 02:04:22 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:22.991 [ 00:34:22.991 { 00:34:22.991 "name": "BaseBdev2", 00:34:22.991 "aliases": [ 00:34:22.991 "e351d1e6-60d6-4eb4-9b0c-a5923028b441" 00:34:22.991 ], 00:34:22.991 "product_name": "Malloc disk", 00:34:22.991 "block_size": 512, 00:34:22.991 "num_blocks": 65536, 00:34:22.991 "uuid": "e351d1e6-60d6-4eb4-9b0c-a5923028b441", 00:34:22.991 "assigned_rate_limits": { 00:34:22.991 "rw_ios_per_sec": 0, 00:34:22.991 "rw_mbytes_per_sec": 0, 00:34:22.991 "r_mbytes_per_sec": 0, 00:34:22.991 "w_mbytes_per_sec": 0 00:34:22.991 }, 00:34:22.991 "claimed": true, 00:34:22.991 "claim_type": "exclusive_write", 00:34:22.991 "zoned": false, 00:34:22.991 "supported_io_types": { 00:34:22.991 "read": true, 00:34:22.991 "write": true, 00:34:22.991 "unmap": true, 00:34:22.991 "write_zeroes": true, 00:34:22.991 "flush": true, 00:34:22.991 "reset": true, 00:34:22.991 "compare": false, 00:34:22.991 "compare_and_write": false, 00:34:22.991 "abort": true, 00:34:22.991 "nvme_admin": false, 00:34:22.991 "nvme_io": false 00:34:22.991 }, 00:34:22.991 "memory_domains": [ 00:34:22.991 { 00:34:22.991 "dma_device_id": "system", 00:34:22.991 "dma_device_type": 1 00:34:22.991 }, 00:34:22.991 { 00:34:22.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.991 "dma_device_type": 2 00:34:22.991 } 00:34:22.991 ], 00:34:22.991 "driver_specific": {} 00:34:22.991 } 00:34:22.991 ] 00:34:22.991 02:04:22 -- common/autotest_common.sh@893 -- # return 0 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.991 02:04:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.250 02:04:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:23.250 "name": "Existed_Raid", 00:34:23.250 "uuid": "6ef9246d-a615-4e07-aeb9-58ee5dda2613", 00:34:23.250 "strip_size_kb": 64, 00:34:23.250 "state": "configuring", 00:34:23.250 "raid_level": "raid5f", 00:34:23.250 "superblock": true, 00:34:23.250 "num_base_bdevs": 3, 00:34:23.250 "num_base_bdevs_discovered": 2, 00:34:23.250 "num_base_bdevs_operational": 3, 00:34:23.250 "base_bdevs_list": [ 00:34:23.250 { 00:34:23.250 "name": "BaseBdev1", 00:34:23.250 "uuid": "af37d783-2ab4-497d-a329-1fcf0617dbcc", 00:34:23.250 "is_configured": true, 00:34:23.250 "data_offset": 2048, 00:34:23.250 "data_size": 63488 00:34:23.250 }, 00:34:23.250 { 00:34:23.250 "name": "BaseBdev2", 00:34:23.250 "uuid": "e351d1e6-60d6-4eb4-9b0c-a5923028b441", 00:34:23.250 
"is_configured": true, 00:34:23.250 "data_offset": 2048, 00:34:23.250 "data_size": 63488 00:34:23.250 }, 00:34:23.250 { 00:34:23.250 "name": "BaseBdev3", 00:34:23.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.250 "is_configured": false, 00:34:23.250 "data_offset": 0, 00:34:23.250 "data_size": 0 00:34:23.250 } 00:34:23.250 ] 00:34:23.250 }' 00:34:23.250 02:04:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:23.250 02:04:23 -- common/autotest_common.sh@10 -- # set +x 00:34:24.184 02:04:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:24.184 [2024-04-24 02:04:24.212432] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:24.184 [2024-04-24 02:04:24.212898] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:34:24.184 [2024-04-24 02:04:24.213023] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:24.184 [2024-04-24 02:04:24.213213] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:34:24.184 BaseBdev3 00:34:24.184 [2024-04-24 02:04:24.220570] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:34:24.184 [2024-04-24 02:04:24.220706] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:34:24.184 [2024-04-24 02:04:24.221066] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:24.184 02:04:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:34:24.184 02:04:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:34:24.184 02:04:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:24.184 02:04:24 -- common/autotest_common.sh@887 -- # local i 00:34:24.184 02:04:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:24.184 02:04:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:24.184 02:04:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:24.442 02:04:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:24.701 [ 00:34:24.701 { 00:34:24.701 "name": "BaseBdev3", 00:34:24.701 "aliases": [ 00:34:24.701 "ebf59ad7-3fdf-4d80-9d45-8217b9bf11bc" 00:34:24.701 ], 00:34:24.701 "product_name": "Malloc disk", 00:34:24.701 "block_size": 512, 00:34:24.701 "num_blocks": 65536, 00:34:24.701 "uuid": "ebf59ad7-3fdf-4d80-9d45-8217b9bf11bc", 00:34:24.701 "assigned_rate_limits": { 00:34:24.701 "rw_ios_per_sec": 0, 00:34:24.701 "rw_mbytes_per_sec": 0, 00:34:24.701 "r_mbytes_per_sec": 0, 00:34:24.701 "w_mbytes_per_sec": 0 00:34:24.701 }, 00:34:24.701 "claimed": true, 00:34:24.701 "claim_type": "exclusive_write", 00:34:24.701 "zoned": false, 00:34:24.701 "supported_io_types": { 00:34:24.701 "read": true, 00:34:24.701 "write": true, 00:34:24.701 "unmap": true, 00:34:24.701 "write_zeroes": true, 00:34:24.701 "flush": true, 00:34:24.701 "reset": true, 00:34:24.701 "compare": false, 00:34:24.701 "compare_and_write": false, 00:34:24.701 "abort": true, 00:34:24.701 "nvme_admin": false, 00:34:24.701 "nvme_io": false 00:34:24.701 }, 00:34:24.701 "memory_domains": [ 00:34:24.701 { 00:34:24.701 "dma_device_id": "system", 00:34:24.701 "dma_device_type": 1 00:34:24.701 }, 00:34:24.701 { 00:34:24.701 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.701 "dma_device_type": 2 00:34:24.701 } 00:34:24.701 ], 00:34:24.701 "driver_specific": {} 00:34:24.701 } 00:34:24.701 ] 00:34:24.701 02:04:24 -- common/autotest_common.sh@893 -- # return 0 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.701 02:04:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:24.960 02:04:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:24.960 "name": "Existed_Raid", 00:34:24.960 "uuid": "6ef9246d-a615-4e07-aeb9-58ee5dda2613", 00:34:24.960 "strip_size_kb": 64, 00:34:24.960 "state": "online", 00:34:24.960 "raid_level": "raid5f", 00:34:24.960 "superblock": true, 00:34:24.960 "num_base_bdevs": 3, 00:34:24.960 "num_base_bdevs_discovered": 3, 00:34:24.960 "num_base_bdevs_operational": 3, 00:34:24.960 "base_bdevs_list": [ 00:34:24.960 { 00:34:24.960 "name": "BaseBdev1", 00:34:24.960 "uuid": "af37d783-2ab4-497d-a329-1fcf0617dbcc", 00:34:24.960 "is_configured": true, 00:34:24.960 "data_offset": 2048, 00:34:24.960 "data_size": 63488 00:34:24.960 }, 00:34:24.960 { 00:34:24.960 "name": "BaseBdev2", 00:34:24.960 "uuid": "e351d1e6-60d6-4eb4-9b0c-a5923028b441", 00:34:24.960 "is_configured": true, 00:34:24.960 "data_offset": 2048, 00:34:24.960 "data_size": 63488 00:34:24.960 }, 00:34:24.960 { 00:34:24.960 "name": "BaseBdev3", 00:34:24.960 "uuid": "ebf59ad7-3fdf-4d80-9d45-8217b9bf11bc", 00:34:24.960 "is_configured": true, 00:34:24.960 "data_offset": 2048, 00:34:24.960 "data_size": 63488 00:34:24.960 } 00:34:24.960 ] 00:34:24.960 }' 00:34:24.960 02:04:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:24.961 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:34:25.527 02:04:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:25.785 [2024-04-24 02:04:25.709265] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:25.785 02:04:25 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:25.785 02:04:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.042 02:04:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:26.042 "name": "Existed_Raid", 00:34:26.042 "uuid": "6ef9246d-a615-4e07-aeb9-58ee5dda2613", 00:34:26.042 "strip_size_kb": 64, 00:34:26.042 "state": "online", 00:34:26.042 "raid_level": "raid5f", 00:34:26.042 "superblock": true, 00:34:26.042 "num_base_bdevs": 3, 00:34:26.042 "num_base_bdevs_discovered": 2, 00:34:26.042 "num_base_bdevs_operational": 2, 00:34:26.042 "base_bdevs_list": [ 00:34:26.042 { 00:34:26.042 "name": null, 00:34:26.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.042 "is_configured": false, 00:34:26.042 "data_offset": 2048, 00:34:26.042 "data_size": 63488 00:34:26.042 }, 00:34:26.042 { 00:34:26.042 "name": "BaseBdev2", 00:34:26.042 "uuid": "e351d1e6-60d6-4eb4-9b0c-a5923028b441", 00:34:26.042 "is_configured": true, 00:34:26.043 "data_offset": 2048, 00:34:26.043 "data_size": 63488 00:34:26.043 }, 00:34:26.043 { 00:34:26.043 "name": "BaseBdev3", 00:34:26.043 "uuid": "ebf59ad7-3fdf-4d80-9d45-8217b9bf11bc", 00:34:26.043 "is_configured": true, 00:34:26.043 "data_offset": 2048, 00:34:26.043 "data_size": 63488 00:34:26.043 } 00:34:26.043 ] 00:34:26.043 }' 00:34:26.043 02:04:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:26.043 02:04:26 -- common/autotest_common.sh@10 -- # set +x 00:34:26.977 02:04:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:26.977 02:04:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:26.978 02:04:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.978 02:04:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:26.978 02:04:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:26.978 02:04:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:26.978 02:04:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:27.237 [2024-04-24 02:04:27.303866] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:27.237 [2024-04-24 02:04:27.304207] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:27.495 [2024-04-24 02:04:27.414311] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:27.495 02:04:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:27.495 02:04:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:27.495 02:04:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.495 02:04:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:27.753 02:04:27 -- 
bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:27.753 02:04:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:27.753 02:04:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:28.011 [2024-04-24 02:04:27.954585] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:28.011 [2024-04-24 02:04:27.954828] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:34:28.011 02:04:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:28.011 02:04:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:28.011 02:04:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.011 02:04:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:34:28.269 02:04:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:34:28.269 02:04:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:28.269 02:04:28 -- bdev/bdev_raid.sh@287 -- # killprocess 136311 00:34:28.269 02:04:28 -- common/autotest_common.sh@936 -- # '[' -z 136311 ']' 00:34:28.269 02:04:28 -- common/autotest_common.sh@940 -- # kill -0 136311 00:34:28.269 02:04:28 -- common/autotest_common.sh@941 -- # uname 00:34:28.269 02:04:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:28.269 02:04:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136311 00:34:28.269 02:04:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:28.269 02:04:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:28.269 02:04:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136311' 00:34:28.269 killing process with pid 136311 00:34:28.269 02:04:28 -- common/autotest_common.sh@955 -- # kill 136311 00:34:28.269 02:04:28 -- common/autotest_common.sh@960 -- # wait 136311 00:34:28.269 [2024-04-24 02:04:28.312949] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:28.269 [2024-04-24 02:04:28.313148] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:30.172 00:34:30.172 real 0m14.828s 00:34:30.172 user 0m25.316s 00:34:30.172 sys 0m2.119s 00:34:30.172 02:04:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:30.172 02:04:29 -- common/autotest_common.sh@10 -- # set +x 00:34:30.172 ************************************ 00:34:30.172 END TEST raid5f_state_function_test_sb 00:34:30.172 ************************************ 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:34:30.172 02:04:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:34:30.172 02:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:30.172 02:04:29 -- common/autotest_common.sh@10 -- # set +x 00:34:30.172 ************************************ 00:34:30.172 START TEST raid5f_superblock_test 00:34:30.172 ************************************ 00:34:30.172 02:04:29 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 3 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@341 -- 
# base_bdevs_pt=() 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@357 -- # raid_pid=136726 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@358 -- # waitforlisten 136726 /var/tmp/spdk-raid.sock 00:34:30.172 02:04:29 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:30.172 02:04:29 -- common/autotest_common.sh@817 -- # '[' -z 136726 ']' 00:34:30.172 02:04:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:30.172 02:04:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:30.172 02:04:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:30.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:30.172 02:04:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:30.173 02:04:29 -- common/autotest_common.sh@10 -- # set +x 00:34:30.173 [2024-04-24 02:04:29.936531] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:34:30.173 [2024-04-24 02:04:29.936943] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136726 ] 00:34:30.173 [2024-04-24 02:04:30.112354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.431 [2024-04-24 02:04:30.396604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.689 [2024-04-24 02:04:30.664885] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:30.948 02:04:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:30.948 02:04:30 -- common/autotest_common.sh@850 -- # return 0 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:30.948 02:04:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:31.232 malloc1 00:34:31.232 02:04:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:31.490 [2024-04-24 02:04:31.502593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:31.490 [2024-04-24 02:04:31.502842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:31.490 [2024-04-24 02:04:31.503043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:34:31.490 [2024-04-24 02:04:31.503208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:31.490 [2024-04-24 02:04:31.506009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:31.490 [2024-04-24 02:04:31.506193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:31.490 pt1 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:31.490 02:04:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:32.057 malloc2 00:34:32.057 02:04:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:34:32.315 [2024-04-24 02:04:32.144498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:32.315 [2024-04-24 02:04:32.144796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:32.315 [2024-04-24 02:04:32.144935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:32.315 [2024-04-24 02:04:32.145067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:32.315 [2024-04-24 02:04:32.147744] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:32.315 [2024-04-24 02:04:32.147926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:32.315 pt2 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:32.315 02:04:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:34:32.572 malloc3 00:34:32.572 02:04:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:32.830 [2024-04-24 02:04:32.685743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:32.830 [2024-04-24 02:04:32.686000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:32.830 [2024-04-24 02:04:32.686090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:34:32.830 [2024-04-24 02:04:32.686305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:32.830 [2024-04-24 02:04:32.689006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:32.830 [2024-04-24 02:04:32.689220] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:32.830 pt3 00:34:32.830 02:04:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:32.830 02:04:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:32.830 02:04:32 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:34:33.089 [2024-04-24 02:04:32.986037] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:33.089 [2024-04-24 02:04:32.988309] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:33.089 [2024-04-24 02:04:32.988531] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:33.089 [2024-04-24 02:04:32.988770] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:34:33.089 [2024-04-24 02:04:32.988874] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:33.089 [2024-04-24 02:04:32.989058] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:33.089 [2024-04-24 02:04:32.995625] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:34:33.089 [2024-04-24 02:04:32.995758] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:34:33.089 [2024-04-24 02:04:32.996040] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.089 02:04:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.347 02:04:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:33.347 "name": "raid_bdev1", 00:34:33.347 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:33.347 "strip_size_kb": 64, 00:34:33.347 "state": "online", 00:34:33.347 "raid_level": "raid5f", 00:34:33.347 "superblock": true, 00:34:33.347 "num_base_bdevs": 3, 00:34:33.347 "num_base_bdevs_discovered": 3, 00:34:33.347 "num_base_bdevs_operational": 3, 00:34:33.347 "base_bdevs_list": [ 00:34:33.347 { 00:34:33.347 "name": "pt1", 00:34:33.347 "uuid": "a90456e5-93c1-58f1-a650-c4544611635e", 00:34:33.347 "is_configured": true, 00:34:33.347 "data_offset": 2048, 00:34:33.347 "data_size": 63488 00:34:33.347 }, 00:34:33.347 { 00:34:33.347 "name": "pt2", 00:34:33.347 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:33.347 "is_configured": true, 00:34:33.347 "data_offset": 2048, 00:34:33.347 "data_size": 63488 00:34:33.347 }, 00:34:33.347 { 00:34:33.347 "name": "pt3", 00:34:33.347 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:33.347 "is_configured": true, 00:34:33.347 "data_offset": 2048, 00:34:33.347 "data_size": 63488 00:34:33.347 } 00:34:33.347 ] 00:34:33.347 }' 00:34:33.347 02:04:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:33.347 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:34:33.914 02:04:33 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:33.914 02:04:33 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:34:34.172 [2024-04-24 02:04:34.083375] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:34.172 02:04:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d441d446-9238-4079-9479-db625bfcbcf7 00:34:34.172 02:04:34 -- bdev/bdev_raid.sh@380 -- # '[' -z d441d446-9238-4079-9479-db625bfcbcf7 ']' 00:34:34.172 02:04:34 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:34.431 [2024-04-24 02:04:34.355256] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:34.431 [2024-04-24 02:04:34.355444] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:34.431 [2024-04-24 02:04:34.355655] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:34.431 [2024-04-24 02:04:34.355834] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:34.431 [2024-04-24 02:04:34.355924] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:34:34.431 02:04:34 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.431 02:04:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:34:34.689 02:04:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:34:34.689 02:04:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:34:34.689 02:04:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:34.689 02:04:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:34.948 02:04:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:34.948 02:04:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:35.207 02:04:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:35.207 02:04:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:35.465 02:04:35 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:35.465 02:04:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:35.723 02:04:35 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:34:35.723 02:04:35 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:34:35.723 02:04:35 -- common/autotest_common.sh@638 -- # local es=0 00:34:35.723 02:04:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:34:35.723 02:04:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.723 02:04:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:35.723 02:04:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.723 02:04:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:35.723 02:04:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.723 02:04:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:35.723 02:04:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.723 02:04:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:35.723 02:04:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:34:35.981 [2024-04-24 02:04:35.879560] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:35.981 [2024-04-24 02:04:35.882045] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:35.981 [2024-04-24 02:04:35.882249] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:35.981 [2024-04-24 02:04:35.882342] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:34:35.981 [2024-04-24 02:04:35.882630] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:34:35.981 [2024-04-24 02:04:35.882770] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:34:35.981 [2024-04-24 02:04:35.882852] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:35.981 [2024-04-24 02:04:35.882952] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:34:35.981 request: 00:34:35.981 { 00:34:35.981 "name": "raid_bdev1", 00:34:35.981 "raid_level": "raid5f", 00:34:35.981 "base_bdevs": [ 00:34:35.981 "malloc1", 00:34:35.981 "malloc2", 00:34:35.981 "malloc3" 00:34:35.981 ], 00:34:35.981 "superblock": false, 00:34:35.981 "strip_size_kb": 64, 00:34:35.981 "method": "bdev_raid_create", 00:34:35.981 "req_id": 1 00:34:35.981 } 00:34:35.981 Got JSON-RPC error response 00:34:35.981 response: 00:34:35.981 { 00:34:35.981 "code": -17, 00:34:35.981 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:35.981 } 00:34:35.981 02:04:35 -- common/autotest_common.sh@641 -- # es=1 00:34:35.981 02:04:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:34:35.981 02:04:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:34:35.981 02:04:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:34:35.981 02:04:35 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.981 02:04:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:34:36.239 02:04:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:34:36.239 02:04:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:34:36.239 02:04:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:36.498 [2024-04-24 02:04:36.347687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:36.498 [2024-04-24 02:04:36.347937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:36.498 [2024-04-24 02:04:36.348008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:34:36.498 [2024-04-24 02:04:36.348160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:36.498 [2024-04-24 02:04:36.350672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:36.498 [2024-04-24 02:04:36.350843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:36.498 [2024-04-24 02:04:36.351147] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:36.498 [2024-04-24 02:04:36.351285] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:36.498 pt1 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.498 02:04:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.755 02:04:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:36.755 "name": "raid_bdev1", 00:34:36.755 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:36.755 "strip_size_kb": 64, 00:34:36.755 "state": "configuring", 00:34:36.755 "raid_level": "raid5f", 00:34:36.755 "superblock": true, 00:34:36.755 "num_base_bdevs": 3, 00:34:36.755 "num_base_bdevs_discovered": 1, 00:34:36.756 "num_base_bdevs_operational": 3, 00:34:36.756 "base_bdevs_list": [ 00:34:36.756 { 00:34:36.756 "name": "pt1", 00:34:36.756 "uuid": "a90456e5-93c1-58f1-a650-c4544611635e", 00:34:36.756 "is_configured": true, 00:34:36.756 "data_offset": 2048, 00:34:36.756 "data_size": 63488 00:34:36.756 }, 00:34:36.756 { 00:34:36.756 "name": null, 00:34:36.756 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:36.756 "is_configured": false, 00:34:36.756 "data_offset": 2048, 00:34:36.756 "data_size": 63488 00:34:36.756 }, 00:34:36.756 { 00:34:36.756 "name": null, 00:34:36.756 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:36.756 "is_configured": false, 00:34:36.756 "data_offset": 2048, 00:34:36.756 "data_size": 63488 00:34:36.756 } 00:34:36.756 ] 00:34:36.756 }' 00:34:36.756 02:04:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:36.756 02:04:36 -- common/autotest_common.sh@10 -- # set +x 00:34:37.321 02:04:37 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:34:37.321 02:04:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:37.579 [2024-04-24 02:04:37.584007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:37.579 [2024-04-24 02:04:37.584335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.579 [2024-04-24 02:04:37.584502] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:37.579 [2024-04-24 02:04:37.584603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.579 [2024-04-24 02:04:37.585164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.579 [2024-04-24 02:04:37.585326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:37.579 [2024-04-24 02:04:37.585579] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:37.579 [2024-04-24 02:04:37.585696] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:37.579 pt2 00:34:37.579 02:04:37 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:37.837 [2024-04-24 02:04:37.868103] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.837 02:04:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.403 02:04:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:38.403 "name": "raid_bdev1", 00:34:38.403 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:38.403 "strip_size_kb": 64, 00:34:38.403 "state": "configuring", 00:34:38.403 "raid_level": "raid5f", 00:34:38.403 "superblock": true, 00:34:38.403 "num_base_bdevs": 3, 00:34:38.403 "num_base_bdevs_discovered": 1, 00:34:38.403 "num_base_bdevs_operational": 3, 00:34:38.403 "base_bdevs_list": [ 00:34:38.403 { 00:34:38.403 "name": "pt1", 00:34:38.403 "uuid": "a90456e5-93c1-58f1-a650-c4544611635e", 00:34:38.403 "is_configured": true, 00:34:38.403 "data_offset": 2048, 00:34:38.403 "data_size": 63488 00:34:38.403 }, 00:34:38.403 { 00:34:38.403 "name": null, 00:34:38.403 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:38.403 "is_configured": false, 00:34:38.403 "data_offset": 2048, 00:34:38.403 "data_size": 63488 00:34:38.403 }, 00:34:38.403 { 00:34:38.403 "name": null, 00:34:38.403 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:38.403 "is_configured": false, 00:34:38.403 "data_offset": 2048, 00:34:38.403 "data_size": 63488 00:34:38.403 } 00:34:38.403 ] 00:34:38.403 }' 00:34:38.403 02:04:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:38.403 02:04:38 -- common/autotest_common.sh@10 -- # set +x 00:34:38.969 02:04:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:34:38.969 02:04:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:38.969 02:04:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:39.226 [2024-04-24 02:04:39.120505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:39.226 [2024-04-24 02:04:39.120746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:39.226 [2024-04-24 02:04:39.120871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:39.226 [2024-04-24 02:04:39.120984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:39.226 [2024-04-24 02:04:39.121501] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:39.226 [2024-04-24 02:04:39.121662] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:39.226 [2024-04-24 02:04:39.121907] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:39.226 [2024-04-24 02:04:39.122027] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:39.226 pt2 00:34:39.226 02:04:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:34:39.226 02:04:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:39.226 02:04:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:39.483 [2024-04-24 02:04:39.420568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:39.483 [2024-04-24 02:04:39.420885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:39.483 [2024-04-24 02:04:39.421035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:34:39.483 [2024-04-24 02:04:39.421161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:39.483 [2024-04-24 02:04:39.421760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:39.483 [2024-04-24 02:04:39.421961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:39.483 [2024-04-24 02:04:39.422247] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:39.483 [2024-04-24 02:04:39.422380] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:39.483 [2024-04-24 02:04:39.422636] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:34:39.483 [2024-04-24 02:04:39.422754] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:39.483 [2024-04-24 02:04:39.422935] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:39.483 [2024-04-24 02:04:39.429367] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:34:39.483 [2024-04-24 02:04:39.429521] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:34:39.483 [2024-04-24 02:04:39.429848] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:39.483 pt3 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:39.484 02:04:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.484 
02:04:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.740 02:04:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:39.740 "name": "raid_bdev1", 00:34:39.740 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:39.740 "strip_size_kb": 64, 00:34:39.740 "state": "online", 00:34:39.740 "raid_level": "raid5f", 00:34:39.740 "superblock": true, 00:34:39.740 "num_base_bdevs": 3, 00:34:39.740 "num_base_bdevs_discovered": 3, 00:34:39.740 "num_base_bdevs_operational": 3, 00:34:39.740 "base_bdevs_list": [ 00:34:39.740 { 00:34:39.740 "name": "pt1", 00:34:39.740 "uuid": "a90456e5-93c1-58f1-a650-c4544611635e", 00:34:39.740 "is_configured": true, 00:34:39.740 "data_offset": 2048, 00:34:39.740 "data_size": 63488 00:34:39.740 }, 00:34:39.740 { 00:34:39.740 "name": "pt2", 00:34:39.740 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:39.740 "is_configured": true, 00:34:39.740 "data_offset": 2048, 00:34:39.740 "data_size": 63488 00:34:39.740 }, 00:34:39.740 { 00:34:39.740 "name": "pt3", 00:34:39.740 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:39.740 "is_configured": true, 00:34:39.740 "data_offset": 2048, 00:34:39.740 "data_size": 63488 00:34:39.740 } 00:34:39.740 ] 00:34:39.740 }' 00:34:39.740 02:04:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:39.740 02:04:39 -- common/autotest_common.sh@10 -- # set +x 00:34:40.673 02:04:40 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:40.673 02:04:40 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:34:40.673 [2024-04-24 02:04:40.753104] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:40.930 02:04:40 -- bdev/bdev_raid.sh@430 -- # '[' d441d446-9238-4079-9479-db625bfcbcf7 '!=' d441d446-9238-4079-9479-db625bfcbcf7 ']' 00:34:40.930 02:04:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:34:40.930 02:04:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:40.930 02:04:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:34:40.930 02:04:40 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:41.188 [2024-04-24 02:04:41.121047] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.188 02:04:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.446 02:04:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:41.446 "name": "raid_bdev1", 00:34:41.446 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:41.446 "strip_size_kb": 64, 
00:34:41.446 "state": "online", 00:34:41.446 "raid_level": "raid5f", 00:34:41.446 "superblock": true, 00:34:41.446 "num_base_bdevs": 3, 00:34:41.446 "num_base_bdevs_discovered": 2, 00:34:41.446 "num_base_bdevs_operational": 2, 00:34:41.446 "base_bdevs_list": [ 00:34:41.446 { 00:34:41.446 "name": null, 00:34:41.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.446 "is_configured": false, 00:34:41.446 "data_offset": 2048, 00:34:41.446 "data_size": 63488 00:34:41.446 }, 00:34:41.446 { 00:34:41.446 "name": "pt2", 00:34:41.446 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:41.446 "is_configured": true, 00:34:41.446 "data_offset": 2048, 00:34:41.446 "data_size": 63488 00:34:41.446 }, 00:34:41.446 { 00:34:41.446 "name": "pt3", 00:34:41.446 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:41.446 "is_configured": true, 00:34:41.446 "data_offset": 2048, 00:34:41.446 "data_size": 63488 00:34:41.446 } 00:34:41.446 ] 00:34:41.446 }' 00:34:41.446 02:04:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:41.446 02:04:41 -- common/autotest_common.sh@10 -- # set +x 00:34:42.379 02:04:42 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:42.380 [2024-04-24 02:04:42.429271] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:42.380 [2024-04-24 02:04:42.429520] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:42.380 [2024-04-24 02:04:42.429753] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:42.380 [2024-04-24 02:04:42.429962] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:42.380 [2024-04-24 02:04:42.430076] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:34:42.380 02:04:42 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:34:42.380 02:04:42 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.945 02:04:42 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:34:42.945 02:04:42 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:34:42.945 02:04:42 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:34:42.945 02:04:42 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:34:42.945 02:04:42 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:43.203 02:04:43 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:34:43.203 02:04:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:34:43.203 02:04:43 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:43.461 02:04:43 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:34:43.461 02:04:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:34:43.461 02:04:43 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:34:43.461 02:04:43 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:34:43.461 02:04:43 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:43.719 [2024-04-24 02:04:43.637454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:43.719 [2024-04-24 02:04:43.637735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:34:43.719 [2024-04-24 02:04:43.637816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:43.719 [2024-04-24 02:04:43.637930] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:43.719 [2024-04-24 02:04:43.640517] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:43.719 [2024-04-24 02:04:43.640689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:43.719 [2024-04-24 02:04:43.640928] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:43.719 [2024-04-24 02:04:43.641088] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:43.719 pt2 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.719 02:04:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.977 02:04:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:43.977 "name": "raid_bdev1", 00:34:43.977 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:43.977 "strip_size_kb": 64, 00:34:43.977 "state": "configuring", 00:34:43.977 "raid_level": "raid5f", 00:34:43.977 "superblock": true, 00:34:43.977 "num_base_bdevs": 3, 00:34:43.977 "num_base_bdevs_discovered": 1, 00:34:43.977 "num_base_bdevs_operational": 2, 00:34:43.977 "base_bdevs_list": [ 00:34:43.977 { 00:34:43.977 "name": null, 00:34:43.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.977 "is_configured": false, 00:34:43.977 "data_offset": 2048, 00:34:43.977 "data_size": 63488 00:34:43.977 }, 00:34:43.977 { 00:34:43.977 "name": "pt2", 00:34:43.977 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:43.977 "is_configured": true, 00:34:43.977 "data_offset": 2048, 00:34:43.978 "data_size": 63488 00:34:43.978 }, 00:34:43.978 { 00:34:43.978 "name": null, 00:34:43.978 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:43.978 "is_configured": false, 00:34:43.978 "data_offset": 2048, 00:34:43.978 "data_size": 63488 00:34:43.978 } 00:34:43.978 ] 00:34:43.978 }' 00:34:43.978 02:04:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:43.978 02:04:43 -- common/autotest_common.sh@10 -- # set +x 00:34:44.544 02:04:44 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:34:44.544 02:04:44 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:34:44.544 02:04:44 -- bdev/bdev_raid.sh@462 -- # i=2 00:34:44.544 02:04:44 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:44.803 [2024-04-24 02:04:44.753724] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:44.803 [2024-04-24 02:04:44.754015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:44.803 [2024-04-24 02:04:44.754161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:44.803 [2024-04-24 02:04:44.754266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:44.803 [2024-04-24 02:04:44.754881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:44.803 [2024-04-24 02:04:44.755038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:44.803 [2024-04-24 02:04:44.755277] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:44.803 [2024-04-24 02:04:44.755404] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:44.803 [2024-04-24 02:04:44.755634] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:34:44.803 [2024-04-24 02:04:44.755740] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:44.803 [2024-04-24 02:04:44.755873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:44.803 [2024-04-24 02:04:44.762737] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:34:44.803 [2024-04-24 02:04:44.762871] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:34:44.803 [2024-04-24 02:04:44.763286] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:44.803 pt3 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:44.803 02:04:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.061 02:04:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:45.061 "name": "raid_bdev1", 00:34:45.061 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:45.061 "strip_size_kb": 64, 00:34:45.061 "state": "online", 00:34:45.061 "raid_level": "raid5f", 00:34:45.061 "superblock": true, 00:34:45.061 "num_base_bdevs": 3, 00:34:45.061 "num_base_bdevs_discovered": 2, 00:34:45.061 "num_base_bdevs_operational": 2, 00:34:45.061 "base_bdevs_list": [ 00:34:45.061 { 00:34:45.061 "name": null, 00:34:45.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.061 "is_configured": false, 00:34:45.061 "data_offset": 2048, 00:34:45.061 "data_size": 63488 00:34:45.061 }, 00:34:45.061 { 00:34:45.061 "name": "pt2", 00:34:45.061 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 
00:34:45.061 "is_configured": true, 00:34:45.062 "data_offset": 2048, 00:34:45.062 "data_size": 63488 00:34:45.062 }, 00:34:45.062 { 00:34:45.062 "name": "pt3", 00:34:45.062 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:45.062 "is_configured": true, 00:34:45.062 "data_offset": 2048, 00:34:45.062 "data_size": 63488 00:34:45.062 } 00:34:45.062 ] 00:34:45.062 }' 00:34:45.062 02:04:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:45.062 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:34:45.627 02:04:45 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:34:45.627 02:04:45 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:45.885 [2024-04-24 02:04:45.932686] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:45.885 [2024-04-24 02:04:45.932756] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:45.885 [2024-04-24 02:04:45.932885] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:45.885 [2024-04-24 02:04:45.933004] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:45.885 [2024-04-24 02:04:45.933027] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:34:45.885 02:04:45 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:34:45.885 02:04:45 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.451 02:04:46 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:34:46.451 02:04:46 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:34:46.451 02:04:46 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:46.451 [2024-04-24 02:04:46.532740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:46.451 [2024-04-24 02:04:46.532836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:46.451 [2024-04-24 02:04:46.532879] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:46.451 [2024-04-24 02:04:46.532913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:46.451 [2024-04-24 02:04:46.535735] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:46.451 [2024-04-24 02:04:46.535796] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:46.451 [2024-04-24 02:04:46.535963] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:46.451 [2024-04-24 02:04:46.536022] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:46.709 pt1 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:46.709 02:04:46 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:46.709 02:04:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.971 02:04:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:46.971 "name": "raid_bdev1", 00:34:46.971 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:46.971 "strip_size_kb": 64, 00:34:46.971 "state": "configuring", 00:34:46.971 "raid_level": "raid5f", 00:34:46.971 "superblock": true, 00:34:46.971 "num_base_bdevs": 3, 00:34:46.971 "num_base_bdevs_discovered": 1, 00:34:46.971 "num_base_bdevs_operational": 3, 00:34:46.971 "base_bdevs_list": [ 00:34:46.971 { 00:34:46.971 "name": "pt1", 00:34:46.971 "uuid": "a90456e5-93c1-58f1-a650-c4544611635e", 00:34:46.971 "is_configured": true, 00:34:46.971 "data_offset": 2048, 00:34:46.971 "data_size": 63488 00:34:46.971 }, 00:34:46.971 { 00:34:46.971 "name": null, 00:34:46.971 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:46.971 "is_configured": false, 00:34:46.971 "data_offset": 2048, 00:34:46.971 "data_size": 63488 00:34:46.971 }, 00:34:46.971 { 00:34:46.971 "name": null, 00:34:46.971 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:46.971 "is_configured": false, 00:34:46.971 "data_offset": 2048, 00:34:46.971 "data_size": 63488 00:34:46.971 } 00:34:46.971 ] 00:34:46.971 }' 00:34:46.971 02:04:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:46.971 02:04:46 -- common/autotest_common.sh@10 -- # set +x 00:34:47.545 02:04:47 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:34:47.545 02:04:47 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:34:47.545 02:04:47 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:47.803 02:04:47 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:34:47.803 02:04:47 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:34:47.803 02:04:47 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:48.061 02:04:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:34:48.061 02:04:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:34:48.061 02:04:48 -- bdev/bdev_raid.sh@489 -- # i=2 00:34:48.061 02:04:48 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:48.319 [2024-04-24 02:04:48.349202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:48.319 [2024-04-24 02:04:48.349300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.319 [2024-04-24 02:04:48.349347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:48.319 [2024-04-24 02:04:48.349381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.319 [2024-04-24 02:04:48.349954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.319 [2024-04-24 02:04:48.350010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:48.319 [2024-04-24 02:04:48.350173] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:34:48.319 [2024-04-24 02:04:48.350192] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:48.319 [2024-04-24 02:04:48.350201] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:48.319 [2024-04-24 02:04:48.350231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:34:48.319 [2024-04-24 02:04:48.350315] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:48.319 pt3 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.319 02:04:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.578 02:04:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:48.578 "name": "raid_bdev1", 00:34:48.578 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:48.578 "strip_size_kb": 64, 00:34:48.578 "state": "configuring", 00:34:48.578 "raid_level": "raid5f", 00:34:48.578 "superblock": true, 00:34:48.578 "num_base_bdevs": 3, 00:34:48.578 "num_base_bdevs_discovered": 1, 00:34:48.578 "num_base_bdevs_operational": 2, 00:34:48.578 "base_bdevs_list": [ 00:34:48.578 { 00:34:48.578 "name": null, 00:34:48.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.578 "is_configured": false, 00:34:48.578 "data_offset": 2048, 00:34:48.578 "data_size": 63488 00:34:48.578 }, 00:34:48.578 { 00:34:48.578 "name": null, 00:34:48.578 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:48.578 "is_configured": false, 00:34:48.578 "data_offset": 2048, 00:34:48.578 "data_size": 63488 00:34:48.578 }, 00:34:48.578 { 00:34:48.578 "name": "pt3", 00:34:48.578 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:48.578 "is_configured": true, 00:34:48.578 "data_offset": 2048, 00:34:48.578 "data_size": 63488 00:34:48.578 } 00:34:48.578 ] 00:34:48.578 }' 00:34:48.578 02:04:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:48.578 02:04:48 -- common/autotest_common.sh@10 -- # set +x 00:34:49.144 02:04:49 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:34:49.144 02:04:49 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:34:49.144 02:04:49 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:49.402 [2024-04-24 02:04:49.420614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:49.402 [2024-04-24 02:04:49.421165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.402 [2024-04-24 
02:04:49.421329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:49.402 [2024-04-24 02:04:49.421461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.402 [2024-04-24 02:04:49.422163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.402 [2024-04-24 02:04:49.422338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:49.402 [2024-04-24 02:04:49.422582] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:49.402 [2024-04-24 02:04:49.422657] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:49.402 [2024-04-24 02:04:49.422802] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:34:49.402 [2024-04-24 02:04:49.422820] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:49.402 [2024-04-24 02:04:49.422940] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:34:49.402 [2024-04-24 02:04:49.431136] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:34:49.402 [2024-04-24 02:04:49.431167] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:34:49.402 [2024-04-24 02:04:49.431444] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.402 pt2 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.402 02:04:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.661 02:04:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:49.661 "name": "raid_bdev1", 00:34:49.661 "uuid": "d441d446-9238-4079-9479-db625bfcbcf7", 00:34:49.661 "strip_size_kb": 64, 00:34:49.661 "state": "online", 00:34:49.661 "raid_level": "raid5f", 00:34:49.661 "superblock": true, 00:34:49.661 "num_base_bdevs": 3, 00:34:49.661 "num_base_bdevs_discovered": 2, 00:34:49.661 "num_base_bdevs_operational": 2, 00:34:49.661 "base_bdevs_list": [ 00:34:49.661 { 00:34:49.661 "name": null, 00:34:49.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:49.661 "is_configured": false, 00:34:49.661 "data_offset": 2048, 00:34:49.661 "data_size": 63488 00:34:49.661 }, 00:34:49.661 { 00:34:49.661 "name": "pt2", 00:34:49.661 "uuid": "a074eea9-0cdc-514c-8cc4-6acea817728c", 00:34:49.661 "is_configured": true, 00:34:49.661 "data_offset": 2048, 
00:34:49.661 "data_size": 63488 00:34:49.661 }, 00:34:49.661 { 00:34:49.661 "name": "pt3", 00:34:49.661 "uuid": "276e8ddb-a80e-530f-81b6-bdd3eaf4d047", 00:34:49.661 "is_configured": true, 00:34:49.661 "data_offset": 2048, 00:34:49.661 "data_size": 63488 00:34:49.661 } 00:34:49.661 ] 00:34:49.661 }' 00:34:49.662 02:04:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:49.662 02:04:49 -- common/autotest_common.sh@10 -- # set +x 00:34:50.594 02:04:50 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:50.594 02:04:50 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:34:50.594 [2024-04-24 02:04:50.624814] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.594 02:04:50 -- bdev/bdev_raid.sh@506 -- # '[' d441d446-9238-4079-9479-db625bfcbcf7 '!=' d441d446-9238-4079-9479-db625bfcbcf7 ']' 00:34:50.594 02:04:50 -- bdev/bdev_raid.sh@511 -- # killprocess 136726 00:34:50.594 02:04:50 -- common/autotest_common.sh@936 -- # '[' -z 136726 ']' 00:34:50.594 02:04:50 -- common/autotest_common.sh@940 -- # kill -0 136726 00:34:50.594 02:04:50 -- common/autotest_common.sh@941 -- # uname 00:34:50.594 02:04:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:50.594 02:04:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136726 00:34:50.594 killing process with pid 136726 00:34:50.594 02:04:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:50.594 02:04:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:50.594 02:04:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136726' 00:34:50.594 02:04:50 -- common/autotest_common.sh@955 -- # kill 136726 00:34:50.594 [2024-04-24 02:04:50.673582] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:50.594 02:04:50 -- common/autotest_common.sh@960 -- # wait 136726 00:34:50.594 [2024-04-24 02:04:50.673658] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:50.594 [2024-04-24 02:04:50.673723] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:50.594 [2024-04-24 02:04:50.673735] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:34:51.159 [2024-04-24 02:04:51.019722] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:34:52.532 00:34:52.532 real 0m22.629s 00:34:52.532 user 0m40.553s 00:34:52.532 sys 0m3.065s 00:34:52.532 02:04:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:52.532 ************************************ 00:34:52.532 END TEST raid5f_superblock_test 00:34:52.532 ************************************ 00:34:52.532 02:04:52 -- common/autotest_common.sh@10 -- # set +x 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:34:52.532 02:04:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:34:52.532 02:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:52.532 02:04:52 -- common/autotest_common.sh@10 -- # set +x 00:34:52.532 ************************************ 00:34:52.532 START TEST raid5f_rebuild_test 00:34:52.532 ************************************ 00:34:52.532 02:04:52 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 
false false 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@544 -- # raid_pid=137369 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137369 /var/tmp/spdk-raid.sock 00:34:52.532 02:04:52 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:52.532 02:04:52 -- common/autotest_common.sh@817 -- # '[' -z 137369 ']' 00:34:52.532 02:04:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:52.532 02:04:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:52.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:52.533 02:04:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:52.533 02:04:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:52.533 02:04:52 -- common/autotest_common.sh@10 -- # set +x 00:34:52.790 [2024-04-24 02:04:52.650696] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:34:52.790 [2024-04-24 02:04:52.650828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137369 ] 00:34:52.790 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:52.790 Zero copy mechanism will not be used. 
00:34:52.790 [2024-04-24 02:04:52.810774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.048 [2024-04-24 02:04:53.106081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.306 [2024-04-24 02:04:53.358355] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:53.563 02:04:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:53.563 02:04:53 -- common/autotest_common.sh@850 -- # return 0 00:34:53.563 02:04:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:34:53.563 02:04:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:34:53.563 02:04:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:54.131 BaseBdev1 00:34:54.131 02:04:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:34:54.131 02:04:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:34:54.131 02:04:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:54.131 BaseBdev2 00:34:54.131 02:04:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:34:54.131 02:04:54 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:34:54.131 02:04:54 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:54.697 BaseBdev3 00:34:54.697 02:04:54 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:34:54.956 spare_malloc 00:34:54.956 02:04:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:55.214 spare_delay 00:34:55.214 02:04:55 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:55.472 [2024-04-24 02:04:55.375789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:55.472 [2024-04-24 02:04:55.375918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:55.472 [2024-04-24 02:04:55.375956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:34:55.472 [2024-04-24 02:04:55.376002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:55.472 [2024-04-24 02:04:55.378692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:55.472 [2024-04-24 02:04:55.378753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:55.472 spare 00:34:55.472 02:04:55 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:34:55.730 [2024-04-24 02:04:55.659870] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:55.730 [2024-04-24 02:04:55.662112] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:55.730 [2024-04-24 02:04:55.662167] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:55.730 [2024-04-24 02:04:55.662274] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:34:55.730 
[2024-04-24 02:04:55.662285] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:55.730 [2024-04-24 02:04:55.662452] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:34:55.730 [2024-04-24 02:04:55.669595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:34:55.730 [2024-04-24 02:04:55.669623] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:34:55.730 [2024-04-24 02:04:55.669878] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.730 02:04:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.988 02:04:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:55.988 "name": "raid_bdev1", 00:34:55.988 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:34:55.988 "strip_size_kb": 64, 00:34:55.988 "state": "online", 00:34:55.988 "raid_level": "raid5f", 00:34:55.988 "superblock": false, 00:34:55.988 "num_base_bdevs": 3, 00:34:55.988 "num_base_bdevs_discovered": 3, 00:34:55.988 "num_base_bdevs_operational": 3, 00:34:55.988 "base_bdevs_list": [ 00:34:55.988 { 00:34:55.988 "name": "BaseBdev1", 00:34:55.988 "uuid": "ae5a548a-d997-4cb3-b01e-493f383583b0", 00:34:55.988 "is_configured": true, 00:34:55.988 "data_offset": 0, 00:34:55.988 "data_size": 65536 00:34:55.988 }, 00:34:55.988 { 00:34:55.988 "name": "BaseBdev2", 00:34:55.988 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:34:55.988 "is_configured": true, 00:34:55.988 "data_offset": 0, 00:34:55.988 "data_size": 65536 00:34:55.988 }, 00:34:55.988 { 00:34:55.988 "name": "BaseBdev3", 00:34:55.988 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:34:55.988 "is_configured": true, 00:34:55.988 "data_offset": 0, 00:34:55.988 "data_size": 65536 00:34:55.988 } 00:34:55.988 ] 00:34:55.988 }' 00:34:55.988 02:04:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:55.988 02:04:55 -- common/autotest_common.sh@10 -- # set +x 00:34:56.553 02:04:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:56.553 02:04:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:34:56.811 [2024-04-24 02:04:56.781518] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:56.811 02:04:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:34:56.811 02:04:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:56.811 02:04:56 -- bdev/bdev_raid.sh@570 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.166 02:04:57 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:34:57.166 02:04:57 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:34:57.166 02:04:57 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:34:57.166 02:04:57 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@12 -- # local i 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:57.166 02:04:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:57.426 [2024-04-24 02:04:57.413595] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:34:57.426 /dev/nbd0 00:34:57.426 02:04:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:57.426 02:04:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:57.426 02:04:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:34:57.426 02:04:57 -- common/autotest_common.sh@855 -- # local i 00:34:57.426 02:04:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:57.426 02:04:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:57.426 02:04:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:34:57.426 02:04:57 -- common/autotest_common.sh@859 -- # break 00:34:57.426 02:04:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:57.426 02:04:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:57.426 02:04:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:57.426 1+0 records in 00:34:57.426 1+0 records out 00:34:57.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290052 s, 14.1 MB/s 00:34:57.426 02:04:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:57.426 02:04:57 -- common/autotest_common.sh@872 -- # size=4096 00:34:57.426 02:04:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:57.426 02:04:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:57.426 02:04:57 -- common/autotest_common.sh@875 -- # return 0 00:34:57.426 02:04:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:57.426 02:04:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:57.426 02:04:57 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:34:57.426 02:04:57 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:34:57.426 02:04:57 -- bdev/bdev_raid.sh@582 -- # echo 128 00:34:57.426 02:04:57 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:34:57.994 512+0 records in 00:34:57.994 512+0 records out 00:34:57.994 67108864 bytes (67 MB, 64 MiB) copied, 0.463994 s, 145 MB/s 00:34:57.994 02:04:57 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:57.994 02:04:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:34:57.994 02:04:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:57.994 02:04:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:57.994 02:04:57 -- bdev/nbd_common.sh@51 -- # local i 00:34:57.994 02:04:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:57.994 02:04:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:58.252 [2024-04-24 02:04:58.244023] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@41 -- # break 00:34:58.252 02:04:58 -- bdev/nbd_common.sh@45 -- # return 0 00:34:58.252 02:04:58 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:58.510 [2024-04-24 02:04:58.520666] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.510 02:04:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.770 02:04:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:58.770 "name": "raid_bdev1", 00:34:58.770 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:34:58.770 "strip_size_kb": 64, 00:34:58.770 "state": "online", 00:34:58.770 "raid_level": "raid5f", 00:34:58.770 "superblock": false, 00:34:58.770 "num_base_bdevs": 3, 00:34:58.770 "num_base_bdevs_discovered": 2, 00:34:58.770 "num_base_bdevs_operational": 2, 00:34:58.770 "base_bdevs_list": [ 00:34:58.770 { 00:34:58.770 "name": null, 00:34:58.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.770 "is_configured": false, 00:34:58.770 "data_offset": 0, 00:34:58.770 "data_size": 65536 00:34:58.770 }, 00:34:58.770 { 00:34:58.770 "name": "BaseBdev2", 00:34:58.770 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:34:58.770 "is_configured": true, 00:34:58.770 "data_offset": 0, 00:34:58.770 "data_size": 65536 00:34:58.770 }, 00:34:58.770 { 00:34:58.770 "name": "BaseBdev3", 00:34:58.770 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:34:58.770 "is_configured": true, 00:34:58.770 "data_offset": 0, 00:34:58.770 "data_size": 65536 00:34:58.770 } 00:34:58.770 ] 00:34:58.770 }' 
00:34:58.770 02:04:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:58.770 02:04:58 -- common/autotest_common.sh@10 -- # set +x 00:34:59.848 02:04:59 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:59.848 [2024-04-24 02:04:59.768983] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:34:59.848 [2024-04-24 02:04:59.769057] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:59.848 [2024-04-24 02:04:59.793464] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:34:59.848 [2024-04-24 02:04:59.804263] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:59.848 02:04:59 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.783 02:05:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.040 02:05:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:01.040 "name": "raid_bdev1", 00:35:01.040 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:01.040 "strip_size_kb": 64, 00:35:01.040 "state": "online", 00:35:01.040 "raid_level": "raid5f", 00:35:01.040 "superblock": false, 00:35:01.040 "num_base_bdevs": 3, 00:35:01.040 "num_base_bdevs_discovered": 3, 00:35:01.040 "num_base_bdevs_operational": 3, 00:35:01.040 "process": { 00:35:01.040 "type": "rebuild", 00:35:01.040 "target": "spare", 00:35:01.040 "progress": { 00:35:01.040 "blocks": 24576, 00:35:01.040 "percent": 18 00:35:01.040 } 00:35:01.040 }, 00:35:01.040 "base_bdevs_list": [ 00:35:01.040 { 00:35:01.040 "name": "spare", 00:35:01.040 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:01.040 "is_configured": true, 00:35:01.040 "data_offset": 0, 00:35:01.040 "data_size": 65536 00:35:01.040 }, 00:35:01.040 { 00:35:01.040 "name": "BaseBdev2", 00:35:01.040 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:01.040 "is_configured": true, 00:35:01.040 "data_offset": 0, 00:35:01.040 "data_size": 65536 00:35:01.040 }, 00:35:01.040 { 00:35:01.040 "name": "BaseBdev3", 00:35:01.040 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:01.040 "is_configured": true, 00:35:01.040 "data_offset": 0, 00:35:01.040 "data_size": 65536 00:35:01.040 } 00:35:01.040 ] 00:35:01.040 }' 00:35:01.040 02:05:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:01.299 02:05:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:01.299 02:05:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:01.299 02:05:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:01.299 02:05:01 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:01.557 [2024-04-24 02:05:01.390290] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:01.557 [2024-04-24 02:05:01.422594] 
bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:01.557 [2024-04-24 02:05:01.422710] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.557 02:05:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.816 02:05:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:01.816 "name": "raid_bdev1", 00:35:01.816 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:01.816 "strip_size_kb": 64, 00:35:01.816 "state": "online", 00:35:01.816 "raid_level": "raid5f", 00:35:01.816 "superblock": false, 00:35:01.816 "num_base_bdevs": 3, 00:35:01.816 "num_base_bdevs_discovered": 2, 00:35:01.816 "num_base_bdevs_operational": 2, 00:35:01.816 "base_bdevs_list": [ 00:35:01.816 { 00:35:01.816 "name": null, 00:35:01.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.816 "is_configured": false, 00:35:01.816 "data_offset": 0, 00:35:01.816 "data_size": 65536 00:35:01.816 }, 00:35:01.816 { 00:35:01.816 "name": "BaseBdev2", 00:35:01.816 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:01.816 "is_configured": true, 00:35:01.816 "data_offset": 0, 00:35:01.816 "data_size": 65536 00:35:01.816 }, 00:35:01.816 { 00:35:01.816 "name": "BaseBdev3", 00:35:01.816 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:01.816 "is_configured": true, 00:35:01.816 "data_offset": 0, 00:35:01.816 "data_size": 65536 00:35:01.816 } 00:35:01.816 ] 00:35:01.816 }' 00:35:01.816 02:05:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:01.816 02:05:01 -- common/autotest_common.sh@10 -- # set +x 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:02.799 "name": "raid_bdev1", 00:35:02.799 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:02.799 "strip_size_kb": 64, 00:35:02.799 "state": "online", 00:35:02.799 "raid_level": "raid5f", 00:35:02.799 "superblock": false, 00:35:02.799 "num_base_bdevs": 3, 00:35:02.799 
"num_base_bdevs_discovered": 2, 00:35:02.799 "num_base_bdevs_operational": 2, 00:35:02.799 "base_bdevs_list": [ 00:35:02.799 { 00:35:02.799 "name": null, 00:35:02.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.799 "is_configured": false, 00:35:02.799 "data_offset": 0, 00:35:02.799 "data_size": 65536 00:35:02.799 }, 00:35:02.799 { 00:35:02.799 "name": "BaseBdev2", 00:35:02.799 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:02.799 "is_configured": true, 00:35:02.799 "data_offset": 0, 00:35:02.799 "data_size": 65536 00:35:02.799 }, 00:35:02.799 { 00:35:02.799 "name": "BaseBdev3", 00:35:02.799 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:02.799 "is_configured": true, 00:35:02.799 "data_offset": 0, 00:35:02.799 "data_size": 65536 00:35:02.799 } 00:35:02.799 ] 00:35:02.799 }' 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:02.799 02:05:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:03.057 02:05:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:35:03.057 02:05:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:03.315 [2024-04-24 02:05:03.177275] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:35:03.315 [2024-04-24 02:05:03.177335] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:03.315 [2024-04-24 02:05:03.197797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:35:03.315 [2024-04-24 02:05:03.208342] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:03.315 02:05:03 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.246 02:05:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:04.503 02:05:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:04.503 "name": "raid_bdev1", 00:35:04.503 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:04.503 "strip_size_kb": 64, 00:35:04.503 "state": "online", 00:35:04.503 "raid_level": "raid5f", 00:35:04.503 "superblock": false, 00:35:04.503 "num_base_bdevs": 3, 00:35:04.503 "num_base_bdevs_discovered": 3, 00:35:04.503 "num_base_bdevs_operational": 3, 00:35:04.503 "process": { 00:35:04.503 "type": "rebuild", 00:35:04.503 "target": "spare", 00:35:04.503 "progress": { 00:35:04.503 "blocks": 24576, 00:35:04.503 "percent": 18 00:35:04.503 } 00:35:04.503 }, 00:35:04.503 "base_bdevs_list": [ 00:35:04.503 { 00:35:04.503 "name": "spare", 00:35:04.503 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:04.503 "is_configured": true, 00:35:04.503 "data_offset": 0, 00:35:04.503 "data_size": 65536 00:35:04.503 }, 00:35:04.503 { 00:35:04.503 "name": "BaseBdev2", 00:35:04.503 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:04.503 "is_configured": true, 
00:35:04.503 "data_offset": 0, 00:35:04.503 "data_size": 65536 00:35:04.503 }, 00:35:04.503 { 00:35:04.503 "name": "BaseBdev3", 00:35:04.503 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:04.503 "is_configured": true, 00:35:04.503 "data_offset": 0, 00:35:04.503 "data_size": 65536 00:35:04.503 } 00:35:04.503 ] 00:35:04.503 }' 00:35:04.503 02:05:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:04.503 02:05:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:04.503 02:05:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@657 -- # local timeout=679 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.761 02:05:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.020 02:05:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:05.020 "name": "raid_bdev1", 00:35:05.020 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:05.020 "strip_size_kb": 64, 00:35:05.020 "state": "online", 00:35:05.020 "raid_level": "raid5f", 00:35:05.020 "superblock": false, 00:35:05.020 "num_base_bdevs": 3, 00:35:05.020 "num_base_bdevs_discovered": 3, 00:35:05.020 "num_base_bdevs_operational": 3, 00:35:05.020 "process": { 00:35:05.020 "type": "rebuild", 00:35:05.020 "target": "spare", 00:35:05.020 "progress": { 00:35:05.020 "blocks": 32768, 00:35:05.020 "percent": 25 00:35:05.020 } 00:35:05.020 }, 00:35:05.020 "base_bdevs_list": [ 00:35:05.020 { 00:35:05.020 "name": "spare", 00:35:05.020 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:05.020 "is_configured": true, 00:35:05.020 "data_offset": 0, 00:35:05.020 "data_size": 65536 00:35:05.020 }, 00:35:05.020 { 00:35:05.020 "name": "BaseBdev2", 00:35:05.020 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:05.020 "is_configured": true, 00:35:05.020 "data_offset": 0, 00:35:05.020 "data_size": 65536 00:35:05.020 }, 00:35:05.020 { 00:35:05.020 "name": "BaseBdev3", 00:35:05.020 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:05.020 "is_configured": true, 00:35:05.020 "data_offset": 0, 00:35:05.020 "data_size": 65536 00:35:05.020 } 00:35:05.020 ] 00:35:05.020 }' 00:35:05.020 02:05:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:05.020 02:05:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:05.020 02:05:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:05.020 02:05:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:05.020 02:05:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:05.962 
02:05:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.962 02:05:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:06.233 02:05:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:06.233 "name": "raid_bdev1", 00:35:06.234 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:06.234 "strip_size_kb": 64, 00:35:06.234 "state": "online", 00:35:06.234 "raid_level": "raid5f", 00:35:06.234 "superblock": false, 00:35:06.234 "num_base_bdevs": 3, 00:35:06.234 "num_base_bdevs_discovered": 3, 00:35:06.234 "num_base_bdevs_operational": 3, 00:35:06.234 "process": { 00:35:06.234 "type": "rebuild", 00:35:06.234 "target": "spare", 00:35:06.234 "progress": { 00:35:06.234 "blocks": 59392, 00:35:06.234 "percent": 45 00:35:06.234 } 00:35:06.234 }, 00:35:06.234 "base_bdevs_list": [ 00:35:06.234 { 00:35:06.234 "name": "spare", 00:35:06.234 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:06.234 "is_configured": true, 00:35:06.234 "data_offset": 0, 00:35:06.234 "data_size": 65536 00:35:06.234 }, 00:35:06.234 { 00:35:06.234 "name": "BaseBdev2", 00:35:06.234 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:06.234 "is_configured": true, 00:35:06.234 "data_offset": 0, 00:35:06.234 "data_size": 65536 00:35:06.234 }, 00:35:06.234 { 00:35:06.234 "name": "BaseBdev3", 00:35:06.234 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:06.234 "is_configured": true, 00:35:06.234 "data_offset": 0, 00:35:06.234 "data_size": 65536 00:35:06.234 } 00:35:06.234 ] 00:35:06.234 }' 00:35:06.234 02:05:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:06.234 02:05:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:06.234 02:05:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:06.492 02:05:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:06.492 02:05:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.426 02:05:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.684 02:05:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:07.684 "name": "raid_bdev1", 00:35:07.684 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:07.684 "strip_size_kb": 64, 00:35:07.684 "state": "online", 00:35:07.684 "raid_level": "raid5f", 00:35:07.684 "superblock": false, 00:35:07.684 "num_base_bdevs": 3, 00:35:07.684 "num_base_bdevs_discovered": 3, 00:35:07.684 "num_base_bdevs_operational": 3, 
00:35:07.684 "process": { 00:35:07.684 "type": "rebuild", 00:35:07.684 "target": "spare", 00:35:07.684 "progress": { 00:35:07.684 "blocks": 86016, 00:35:07.684 "percent": 65 00:35:07.684 } 00:35:07.684 }, 00:35:07.684 "base_bdevs_list": [ 00:35:07.684 { 00:35:07.684 "name": "spare", 00:35:07.684 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:07.684 "is_configured": true, 00:35:07.684 "data_offset": 0, 00:35:07.684 "data_size": 65536 00:35:07.684 }, 00:35:07.684 { 00:35:07.684 "name": "BaseBdev2", 00:35:07.684 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:07.684 "is_configured": true, 00:35:07.684 "data_offset": 0, 00:35:07.684 "data_size": 65536 00:35:07.684 }, 00:35:07.684 { 00:35:07.684 "name": "BaseBdev3", 00:35:07.684 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:07.684 "is_configured": true, 00:35:07.684 "data_offset": 0, 00:35:07.684 "data_size": 65536 00:35:07.684 } 00:35:07.684 ] 00:35:07.684 }' 00:35:07.684 02:05:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:07.684 02:05:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:07.684 02:05:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:07.684 02:05:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:07.684 02:05:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.617 02:05:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.874 02:05:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:08.874 "name": "raid_bdev1", 00:35:08.874 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:08.874 "strip_size_kb": 64, 00:35:08.874 "state": "online", 00:35:08.874 "raid_level": "raid5f", 00:35:08.874 "superblock": false, 00:35:08.874 "num_base_bdevs": 3, 00:35:08.874 "num_base_bdevs_discovered": 3, 00:35:08.874 "num_base_bdevs_operational": 3, 00:35:08.874 "process": { 00:35:08.874 "type": "rebuild", 00:35:08.874 "target": "spare", 00:35:08.874 "progress": { 00:35:08.874 "blocks": 112640, 00:35:08.874 "percent": 85 00:35:08.874 } 00:35:08.874 }, 00:35:08.874 "base_bdevs_list": [ 00:35:08.874 { 00:35:08.874 "name": "spare", 00:35:08.874 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:08.874 "is_configured": true, 00:35:08.874 "data_offset": 0, 00:35:08.874 "data_size": 65536 00:35:08.874 }, 00:35:08.874 { 00:35:08.874 "name": "BaseBdev2", 00:35:08.874 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:08.874 "is_configured": true, 00:35:08.874 "data_offset": 0, 00:35:08.874 "data_size": 65536 00:35:08.874 }, 00:35:08.874 { 00:35:08.874 "name": "BaseBdev3", 00:35:08.874 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:08.874 "is_configured": true, 00:35:08.874 "data_offset": 0, 00:35:08.874 "data_size": 65536 00:35:08.874 } 00:35:08.874 ] 00:35:08.874 }' 00:35:08.874 02:05:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:08.874 02:05:08 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:08.874 02:05:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:08.874 02:05:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:08.874 02:05:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:09.835 [2024-04-24 02:05:09.674716] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:09.835 [2024-04-24 02:05:09.674817] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:09.835 [2024-04-24 02:05:09.674915] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:10.093 02:05:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:10.093 02:05:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:10.093 02:05:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:10.093 02:05:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:10.094 02:05:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:10.094 02:05:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:10.094 02:05:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.094 02:05:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.350 02:05:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:10.350 "name": "raid_bdev1", 00:35:10.350 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:10.350 "strip_size_kb": 64, 00:35:10.350 "state": "online", 00:35:10.350 "raid_level": "raid5f", 00:35:10.350 "superblock": false, 00:35:10.350 "num_base_bdevs": 3, 00:35:10.350 "num_base_bdevs_discovered": 3, 00:35:10.350 "num_base_bdevs_operational": 3, 00:35:10.351 "base_bdevs_list": [ 00:35:10.351 { 00:35:10.351 "name": "spare", 00:35:10.351 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:10.351 "is_configured": true, 00:35:10.351 "data_offset": 0, 00:35:10.351 "data_size": 65536 00:35:10.351 }, 00:35:10.351 { 00:35:10.351 "name": "BaseBdev2", 00:35:10.351 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:10.351 "is_configured": true, 00:35:10.351 "data_offset": 0, 00:35:10.351 "data_size": 65536 00:35:10.351 }, 00:35:10.351 { 00:35:10.351 "name": "BaseBdev3", 00:35:10.351 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:10.351 "is_configured": true, 00:35:10.351 "data_offset": 0, 00:35:10.351 "data_size": 65536 00:35:10.351 } 00:35:10.351 ] 00:35:10.351 }' 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@660 -- # break 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.351 02:05:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:10.609 "name": "raid_bdev1", 00:35:10.609 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:10.609 "strip_size_kb": 64, 00:35:10.609 "state": "online", 00:35:10.609 "raid_level": "raid5f", 00:35:10.609 "superblock": false, 00:35:10.609 "num_base_bdevs": 3, 00:35:10.609 "num_base_bdevs_discovered": 3, 00:35:10.609 "num_base_bdevs_operational": 3, 00:35:10.609 "base_bdevs_list": [ 00:35:10.609 { 00:35:10.609 "name": "spare", 00:35:10.609 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:10.609 "is_configured": true, 00:35:10.609 "data_offset": 0, 00:35:10.609 "data_size": 65536 00:35:10.609 }, 00:35:10.609 { 00:35:10.609 "name": "BaseBdev2", 00:35:10.609 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:10.609 "is_configured": true, 00:35:10.609 "data_offset": 0, 00:35:10.609 "data_size": 65536 00:35:10.609 }, 00:35:10.609 { 00:35:10.609 "name": "BaseBdev3", 00:35:10.609 "uuid": "26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:10.609 "is_configured": true, 00:35:10.609 "data_offset": 0, 00:35:10.609 "data_size": 65536 00:35:10.609 } 00:35:10.609 ] 00:35:10.609 }' 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.609 02:05:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.867 02:05:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:10.867 "name": "raid_bdev1", 00:35:10.867 "uuid": "b1b78adb-f402-47c8-8435-63a4f5a0c9b6", 00:35:10.867 "strip_size_kb": 64, 00:35:10.867 "state": "online", 00:35:10.867 "raid_level": "raid5f", 00:35:10.867 "superblock": false, 00:35:10.867 "num_base_bdevs": 3, 00:35:10.867 "num_base_bdevs_discovered": 3, 00:35:10.867 "num_base_bdevs_operational": 3, 00:35:10.867 "base_bdevs_list": [ 00:35:10.867 { 00:35:10.867 "name": "spare", 00:35:10.867 "uuid": "ed1e5c79-8540-5720-b769-486fd111a43f", 00:35:10.867 "is_configured": true, 00:35:10.867 "data_offset": 0, 00:35:10.867 "data_size": 65536 00:35:10.867 }, 00:35:10.867 { 00:35:10.867 "name": "BaseBdev2", 00:35:10.867 "uuid": "7ecb85e9-57f7-441d-a5eb-550852b18a6d", 00:35:10.867 "is_configured": true, 00:35:10.867 "data_offset": 0, 00:35:10.867 "data_size": 65536 00:35:10.867 }, 00:35:10.867 { 00:35:10.867 "name": "BaseBdev3", 00:35:10.867 "uuid": 
"26e02ae2-6559-4b35-9c89-c53bff12484d", 00:35:10.867 "is_configured": true, 00:35:10.867 "data_offset": 0, 00:35:10.867 "data_size": 65536 00:35:10.867 } 00:35:10.867 ] 00:35:10.867 }' 00:35:10.867 02:05:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:10.867 02:05:10 -- common/autotest_common.sh@10 -- # set +x 00:35:11.801 02:05:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:11.801 [2024-04-24 02:05:11.748141] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:11.801 [2024-04-24 02:05:11.748194] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:11.801 [2024-04-24 02:05:11.748281] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:11.801 [2024-04-24 02:05:11.748364] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:11.801 [2024-04-24 02:05:11.748377] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:35:11.801 02:05:11 -- bdev/bdev_raid.sh@671 -- # jq length 00:35:11.801 02:05:11 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.058 02:05:12 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:35:12.058 02:05:12 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:35:12.058 02:05:12 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:12.058 02:05:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:12.058 02:05:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:12.058 02:05:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:12.058 02:05:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:12.058 02:05:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:12.058 02:05:12 -- bdev/nbd_common.sh@12 -- # local i 00:35:12.059 02:05:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:12.059 02:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:12.059 02:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:12.317 /dev/nbd0 00:35:12.317 02:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:12.317 02:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:12.317 02:05:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:12.317 02:05:12 -- common/autotest_common.sh@855 -- # local i 00:35:12.317 02:05:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:12.317 02:05:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:12.317 02:05:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:12.317 02:05:12 -- common/autotest_common.sh@859 -- # break 00:35:12.317 02:05:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:12.317 02:05:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:12.317 02:05:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:12.317 1+0 records in 00:35:12.317 1+0 records out 00:35:12.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421001 s, 9.7 MB/s 00:35:12.317 02:05:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.317 02:05:12 
-- common/autotest_common.sh@872 -- # size=4096 00:35:12.317 02:05:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.317 02:05:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:12.317 02:05:12 -- common/autotest_common.sh@875 -- # return 0 00:35:12.317 02:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:12.317 02:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:12.317 02:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:12.575 /dev/nbd1 00:35:12.575 02:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:12.575 02:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:12.575 02:05:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:35:12.575 02:05:12 -- common/autotest_common.sh@855 -- # local i 00:35:12.575 02:05:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:12.575 02:05:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:12.575 02:05:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:35:12.575 02:05:12 -- common/autotest_common.sh@859 -- # break 00:35:12.575 02:05:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:12.575 02:05:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:12.575 02:05:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:12.575 1+0 records in 00:35:12.575 1+0 records out 00:35:12.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439649 s, 9.3 MB/s 00:35:12.575 02:05:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.575 02:05:12 -- common/autotest_common.sh@872 -- # size=4096 00:35:12.575 02:05:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.575 02:05:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:12.575 02:05:12 -- common/autotest_common.sh@875 -- # return 0 00:35:12.575 02:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:12.575 02:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:12.575 02:05:12 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:12.833 02:05:12 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:12.833 02:05:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:12.833 02:05:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:12.833 02:05:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:12.833 02:05:12 -- bdev/nbd_common.sh@51 -- # local i 00:35:12.833 02:05:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:12.833 02:05:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@41 -- # break 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@45 -- # return 0 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:35:13.091 02:05:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@41 -- # break 00:35:13.656 02:05:13 -- bdev/nbd_common.sh@45 -- # return 0 00:35:13.656 02:05:13 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:35:13.656 02:05:13 -- bdev/bdev_raid.sh@709 -- # killprocess 137369 00:35:13.656 02:05:13 -- common/autotest_common.sh@936 -- # '[' -z 137369 ']' 00:35:13.656 02:05:13 -- common/autotest_common.sh@940 -- # kill -0 137369 00:35:13.656 02:05:13 -- common/autotest_common.sh@941 -- # uname 00:35:13.656 02:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:13.656 02:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137369 00:35:13.656 02:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:13.656 02:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:13.656 02:05:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137369' 00:35:13.656 killing process with pid 137369 00:35:13.656 02:05:13 -- common/autotest_common.sh@955 -- # kill 137369 00:35:13.656 Received shutdown signal, test time was about 60.000000 seconds 00:35:13.656 00:35:13.656 Latency(us) 00:35:13.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.656 =================================================================================================================== 00:35:13.656 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:13.656 02:05:13 -- common/autotest_common.sh@960 -- # wait 137369 00:35:13.656 [2024-04-24 02:05:13.502060] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:13.913 [2024-04-24 02:05:13.972139] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:15.814 ************************************ 00:35:15.814 END TEST raid5f_rebuild_test 00:35:15.814 ************************************ 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@711 -- # return 0 00:35:15.814 00:35:15.814 real 0m22.936s 00:35:15.814 user 0m33.879s 00:35:15.814 sys 0m3.169s 00:35:15.814 02:05:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:15.814 02:05:15 -- common/autotest_common.sh@10 -- # set +x 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:35:15.814 02:05:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:35:15.814 02:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:15.814 02:05:15 -- common/autotest_common.sh@10 -- # set +x 00:35:15.814 ************************************ 00:35:15.814 START TEST raid5f_rebuild_test_sb 00:35:15.814 ************************************ 00:35:15.814 02:05:15 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 true false 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@519 -- # 
local superblock=true 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@544 -- # raid_pid=137932 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:15.814 02:05:15 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137932 /var/tmp/spdk-raid.sock 00:35:15.814 02:05:15 -- common/autotest_common.sh@817 -- # '[' -z 137932 ']' 00:35:15.814 02:05:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:15.814 02:05:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:15.814 02:05:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:15.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:15.814 02:05:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:15.814 02:05:15 -- common/autotest_common.sh@10 -- # set +x 00:35:15.814 [2024-04-24 02:05:15.708583] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:35:15.814 [2024-04-24 02:05:15.708945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137932 ] 00:35:15.814 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:15.814 Zero copy mechanism will not be used. 
00:35:15.814 [2024-04-24 02:05:15.886500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.073 [2024-04-24 02:05:16.143171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.331 [2024-04-24 02:05:16.409466] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:16.590 02:05:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:16.590 02:05:16 -- common/autotest_common.sh@850 -- # return 0 00:35:16.590 02:05:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:35:16.590 02:05:16 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:35:16.590 02:05:16 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:17.154 BaseBdev1_malloc 00:35:17.154 02:05:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:17.412 [2024-04-24 02:05:17.239995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:17.412 [2024-04-24 02:05:17.240284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.412 [2024-04-24 02:05:17.240432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:35:17.412 [2024-04-24 02:05:17.240592] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.412 [2024-04-24 02:05:17.243486] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.412 [2024-04-24 02:05:17.243669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:17.412 BaseBdev1 00:35:17.412 02:05:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:35:17.412 02:05:17 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:35:17.412 02:05:17 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:17.669 BaseBdev2_malloc 00:35:17.669 02:05:17 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:17.926 [2024-04-24 02:05:17.819230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:17.927 [2024-04-24 02:05:17.819527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.927 [2024-04-24 02:05:17.819668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:17.927 [2024-04-24 02:05:17.819816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.927 [2024-04-24 02:05:17.822493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.927 [2024-04-24 02:05:17.822676] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:17.927 BaseBdev2 00:35:17.927 02:05:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:35:17.927 02:05:17 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:35:17.927 02:05:17 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:18.185 BaseBdev3_malloc 00:35:18.185 02:05:18 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:35:18.443 [2024-04-24 02:05:18.359894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:18.443 [2024-04-24 02:05:18.360220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.443 [2024-04-24 02:05:18.360298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:35:18.443 [2024-04-24 02:05:18.360423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.443 [2024-04-24 02:05:18.363053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.443 [2024-04-24 02:05:18.363241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:18.443 BaseBdev3 00:35:18.443 02:05:18 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:18.701 spare_malloc 00:35:18.701 02:05:18 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:18.959 spare_delay 00:35:18.959 02:05:18 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:19.217 [2024-04-24 02:05:19.161971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:19.217 [2024-04-24 02:05:19.162263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.217 [2024-04-24 02:05:19.162341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:19.217 [2024-04-24 02:05:19.162645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.217 [2024-04-24 02:05:19.165314] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.217 [2024-04-24 02:05:19.165496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:19.217 spare 00:35:19.217 02:05:19 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:35:19.475 [2024-04-24 02:05:19.378093] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:19.475 [2024-04-24 02:05:19.380480] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:19.475 [2024-04-24 02:05:19.380683] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:19.475 [2024-04-24 02:05:19.380926] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:35:19.475 [2024-04-24 02:05:19.381026] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:19.475 [2024-04-24 02:05:19.381207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:35:19.475 [2024-04-24 02:05:19.388413] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:35:19.475 [2024-04-24 02:05:19.388552] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:35:19.475 [2024-04-24 02:05:19.388846] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:19.475 02:05:19 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:19.475 02:05:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:19.475 02:05:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:19.475 02:05:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:19.475 02:05:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.476 02:05:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.733 02:05:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:19.734 "name": "raid_bdev1", 00:35:19.734 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:19.734 "strip_size_kb": 64, 00:35:19.734 "state": "online", 00:35:19.734 "raid_level": "raid5f", 00:35:19.734 "superblock": true, 00:35:19.734 "num_base_bdevs": 3, 00:35:19.734 "num_base_bdevs_discovered": 3, 00:35:19.734 "num_base_bdevs_operational": 3, 00:35:19.734 "base_bdevs_list": [ 00:35:19.734 { 00:35:19.734 "name": "BaseBdev1", 00:35:19.734 "uuid": "f524b44f-d27c-5651-91c3-4ab910b68306", 00:35:19.734 "is_configured": true, 00:35:19.734 "data_offset": 2048, 00:35:19.734 "data_size": 63488 00:35:19.734 }, 00:35:19.734 { 00:35:19.734 "name": "BaseBdev2", 00:35:19.734 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:19.734 "is_configured": true, 00:35:19.734 "data_offset": 2048, 00:35:19.734 "data_size": 63488 00:35:19.734 }, 00:35:19.734 { 00:35:19.734 "name": "BaseBdev3", 00:35:19.734 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:19.734 "is_configured": true, 00:35:19.734 "data_offset": 2048, 00:35:19.734 "data_size": 63488 00:35:19.734 } 00:35:19.734 ] 00:35:19.734 }' 00:35:19.734 02:05:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:19.734 02:05:19 -- common/autotest_common.sh@10 -- # set +x 00:35:20.298 02:05:20 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:20.298 02:05:20 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:35:20.574 [2024-04-24 02:05:20.401128] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:20.574 02:05:20 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:35:20.574 02:05:20 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.574 02:05:20 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:20.830 02:05:20 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:35:20.830 02:05:20 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:35:20.831 02:05:20 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:35:20.831 02:05:20 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:20.831 02:05:20 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@12 -- # local i 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:20.831 02:05:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:21.089 [2024-04-24 02:05:20.921062] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:35:21.089 /dev/nbd0 00:35:21.089 02:05:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:21.089 02:05:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:21.089 02:05:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:21.089 02:05:20 -- common/autotest_common.sh@855 -- # local i 00:35:21.089 02:05:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:21.089 02:05:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:21.089 02:05:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:21.089 02:05:20 -- common/autotest_common.sh@859 -- # break 00:35:21.089 02:05:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:21.089 02:05:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:21.089 02:05:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:21.089 1+0 records in 00:35:21.089 1+0 records out 00:35:21.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413463 s, 9.9 MB/s 00:35:21.089 02:05:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:21.089 02:05:20 -- common/autotest_common.sh@872 -- # size=4096 00:35:21.089 02:05:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:21.089 02:05:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:21.089 02:05:20 -- common/autotest_common.sh@875 -- # return 0 00:35:21.089 02:05:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:21.089 02:05:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:21.089 02:05:20 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:35:21.089 02:05:20 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:35:21.089 02:05:20 -- bdev/bdev_raid.sh@582 -- # echo 128 00:35:21.089 02:05:20 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:35:21.655 496+0 records in 00:35:21.655 496+0 records out 00:35:21.655 65011712 bytes (65 MB, 62 MiB) copied, 0.510326 s, 127 MB/s 00:35:21.655 02:05:21 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:21.655 02:05:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:21.655 02:05:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:21.655 02:05:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:21.655 02:05:21 -- bdev/nbd_common.sh@51 -- # local i 00:35:21.655 02:05:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:21.655 02:05:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:21.914 [2024-04-24 02:05:21.830216] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:35:21.914 02:05:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@41 -- # break 00:35:21.914 02:05:21 -- bdev/nbd_common.sh@45 -- # return 0 00:35:21.914 02:05:21 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:22.172 [2024-04-24 02:05:22.095398] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.172 02:05:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.450 02:05:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:22.450 "name": "raid_bdev1", 00:35:22.450 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:22.450 "strip_size_kb": 64, 00:35:22.450 "state": "online", 00:35:22.450 "raid_level": "raid5f", 00:35:22.450 "superblock": true, 00:35:22.450 "num_base_bdevs": 3, 00:35:22.450 "num_base_bdevs_discovered": 2, 00:35:22.450 "num_base_bdevs_operational": 2, 00:35:22.450 "base_bdevs_list": [ 00:35:22.450 { 00:35:22.450 "name": null, 00:35:22.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:22.450 "is_configured": false, 00:35:22.450 "data_offset": 2048, 00:35:22.450 "data_size": 63488 00:35:22.450 }, 00:35:22.450 { 00:35:22.450 "name": "BaseBdev2", 00:35:22.450 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:22.450 "is_configured": true, 00:35:22.450 "data_offset": 2048, 00:35:22.450 "data_size": 63488 00:35:22.450 }, 00:35:22.450 { 00:35:22.450 "name": "BaseBdev3", 00:35:22.450 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:22.450 "is_configured": true, 00:35:22.450 "data_offset": 2048, 00:35:22.450 "data_size": 63488 00:35:22.450 } 00:35:22.450 ] 00:35:22.450 }' 00:35:22.450 02:05:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:22.450 02:05:22 -- common/autotest_common.sh@10 -- # set +x 00:35:23.019 02:05:22 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:23.276 [2024-04-24 02:05:23.162106] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:35:23.276 [2024-04-24 02:05:23.162409] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:23.276 [2024-04-24 02:05:23.186335] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 
00:35:23.276 [2024-04-24 02:05:23.197233] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:23.276 02:05:23 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.209 02:05:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.467 02:05:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:24.467 "name": "raid_bdev1", 00:35:24.467 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:24.467 "strip_size_kb": 64, 00:35:24.467 "state": "online", 00:35:24.467 "raid_level": "raid5f", 00:35:24.467 "superblock": true, 00:35:24.467 "num_base_bdevs": 3, 00:35:24.467 "num_base_bdevs_discovered": 3, 00:35:24.467 "num_base_bdevs_operational": 3, 00:35:24.467 "process": { 00:35:24.467 "type": "rebuild", 00:35:24.467 "target": "spare", 00:35:24.467 "progress": { 00:35:24.467 "blocks": 24576, 00:35:24.467 "percent": 19 00:35:24.467 } 00:35:24.467 }, 00:35:24.467 "base_bdevs_list": [ 00:35:24.467 { 00:35:24.467 "name": "spare", 00:35:24.467 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:24.467 "is_configured": true, 00:35:24.467 "data_offset": 2048, 00:35:24.467 "data_size": 63488 00:35:24.467 }, 00:35:24.467 { 00:35:24.467 "name": "BaseBdev2", 00:35:24.467 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:24.467 "is_configured": true, 00:35:24.467 "data_offset": 2048, 00:35:24.467 "data_size": 63488 00:35:24.467 }, 00:35:24.467 { 00:35:24.467 "name": "BaseBdev3", 00:35:24.467 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:24.467 "is_configured": true, 00:35:24.467 "data_offset": 2048, 00:35:24.467 "data_size": 63488 00:35:24.467 } 00:35:24.467 ] 00:35:24.467 }' 00:35:24.467 02:05:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:24.467 02:05:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:24.468 02:05:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:24.468 02:05:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:24.468 02:05:24 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:24.725 [2024-04-24 02:05:24.771337] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:24.983 [2024-04-24 02:05:24.815362] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:24.983 [2024-04-24 02:05:24.815691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:24.983 02:05:24 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:24.983 02:05:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:24.983 02:05:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:24.983 02:05:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.984 02:05:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.241 02:05:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:25.241 "name": "raid_bdev1", 00:35:25.241 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:25.241 "strip_size_kb": 64, 00:35:25.241 "state": "online", 00:35:25.241 "raid_level": "raid5f", 00:35:25.241 "superblock": true, 00:35:25.241 "num_base_bdevs": 3, 00:35:25.241 "num_base_bdevs_discovered": 2, 00:35:25.241 "num_base_bdevs_operational": 2, 00:35:25.241 "base_bdevs_list": [ 00:35:25.241 { 00:35:25.241 "name": null, 00:35:25.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.242 "is_configured": false, 00:35:25.242 "data_offset": 2048, 00:35:25.242 "data_size": 63488 00:35:25.242 }, 00:35:25.242 { 00:35:25.242 "name": "BaseBdev2", 00:35:25.242 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:25.242 "is_configured": true, 00:35:25.242 "data_offset": 2048, 00:35:25.242 "data_size": 63488 00:35:25.242 }, 00:35:25.242 { 00:35:25.242 "name": "BaseBdev3", 00:35:25.242 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:25.242 "is_configured": true, 00:35:25.242 "data_offset": 2048, 00:35:25.242 "data_size": 63488 00:35:25.242 } 00:35:25.242 ] 00:35:25.242 }' 00:35:25.242 02:05:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:25.242 02:05:25 -- common/autotest_common.sh@10 -- # set +x 00:35:25.807 02:05:25 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:25.808 02:05:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:25.808 02:05:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:35:25.808 02:05:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:35:25.808 02:05:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:25.808 02:05:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.808 02:05:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.065 02:05:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:26.066 "name": "raid_bdev1", 00:35:26.066 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:26.066 "strip_size_kb": 64, 00:35:26.066 "state": "online", 00:35:26.066 "raid_level": "raid5f", 00:35:26.066 "superblock": true, 00:35:26.066 "num_base_bdevs": 3, 00:35:26.066 "num_base_bdevs_discovered": 2, 00:35:26.066 "num_base_bdevs_operational": 2, 00:35:26.066 "base_bdevs_list": [ 00:35:26.066 { 00:35:26.066 "name": null, 00:35:26.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:26.066 "is_configured": false, 00:35:26.066 "data_offset": 2048, 00:35:26.066 "data_size": 63488 00:35:26.066 }, 00:35:26.066 { 00:35:26.066 "name": "BaseBdev2", 00:35:26.066 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:26.066 "is_configured": true, 00:35:26.066 "data_offset": 2048, 00:35:26.066 "data_size": 63488 00:35:26.066 }, 00:35:26.066 { 00:35:26.066 "name": "BaseBdev3", 00:35:26.066 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:26.066 
"is_configured": true, 00:35:26.066 "data_offset": 2048, 00:35:26.066 "data_size": 63488 00:35:26.066 } 00:35:26.066 ] 00:35:26.066 }' 00:35:26.066 02:05:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:26.066 02:05:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:26.066 02:05:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:26.066 02:05:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:35:26.066 02:05:26 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:26.324 [2024-04-24 02:05:26.393500] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:35:26.324 [2024-04-24 02:05:26.393808] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:26.582 [2024-04-24 02:05:26.414239] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:35:26.582 [2024-04-24 02:05:26.425239] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:26.582 02:05:26 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.536 02:05:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:27.794 "name": "raid_bdev1", 00:35:27.794 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:27.794 "strip_size_kb": 64, 00:35:27.794 "state": "online", 00:35:27.794 "raid_level": "raid5f", 00:35:27.794 "superblock": true, 00:35:27.794 "num_base_bdevs": 3, 00:35:27.794 "num_base_bdevs_discovered": 3, 00:35:27.794 "num_base_bdevs_operational": 3, 00:35:27.794 "process": { 00:35:27.794 "type": "rebuild", 00:35:27.794 "target": "spare", 00:35:27.794 "progress": { 00:35:27.794 "blocks": 24576, 00:35:27.794 "percent": 19 00:35:27.794 } 00:35:27.794 }, 00:35:27.794 "base_bdevs_list": [ 00:35:27.794 { 00:35:27.794 "name": "spare", 00:35:27.794 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:27.794 "is_configured": true, 00:35:27.794 "data_offset": 2048, 00:35:27.794 "data_size": 63488 00:35:27.794 }, 00:35:27.794 { 00:35:27.794 "name": "BaseBdev2", 00:35:27.794 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:27.794 "is_configured": true, 00:35:27.794 "data_offset": 2048, 00:35:27.794 "data_size": 63488 00:35:27.794 }, 00:35:27.794 { 00:35:27.794 "name": "BaseBdev3", 00:35:27.794 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:27.794 "is_configured": true, 00:35:27.794 "data_offset": 2048, 00:35:27.794 "data_size": 63488 00:35:27.794 } 00:35:27.794 ] 00:35:27.794 }' 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:35:27.794 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@657 -- # local timeout=702 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.794 02:05:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.052 02:05:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:28.052 "name": "raid_bdev1", 00:35:28.052 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:28.052 "strip_size_kb": 64, 00:35:28.052 "state": "online", 00:35:28.052 "raid_level": "raid5f", 00:35:28.052 "superblock": true, 00:35:28.052 "num_base_bdevs": 3, 00:35:28.052 "num_base_bdevs_discovered": 3, 00:35:28.052 "num_base_bdevs_operational": 3, 00:35:28.052 "process": { 00:35:28.052 "type": "rebuild", 00:35:28.052 "target": "spare", 00:35:28.052 "progress": { 00:35:28.052 "blocks": 32768, 00:35:28.052 "percent": 25 00:35:28.052 } 00:35:28.052 }, 00:35:28.052 "base_bdevs_list": [ 00:35:28.052 { 00:35:28.052 "name": "spare", 00:35:28.052 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:28.052 "is_configured": true, 00:35:28.052 "data_offset": 2048, 00:35:28.052 "data_size": 63488 00:35:28.052 }, 00:35:28.052 { 00:35:28.052 "name": "BaseBdev2", 00:35:28.052 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:28.052 "is_configured": true, 00:35:28.052 "data_offset": 2048, 00:35:28.052 "data_size": 63488 00:35:28.052 }, 00:35:28.052 { 00:35:28.052 "name": "BaseBdev3", 00:35:28.052 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:28.052 "is_configured": true, 00:35:28.052 "data_offset": 2048, 00:35:28.052 "data_size": 63488 00:35:28.052 } 00:35:28.052 ] 00:35:28.052 }' 00:35:28.052 02:05:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:28.309 02:05:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:28.310 02:05:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:28.310 02:05:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:28.310 02:05:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:29.243 02:05:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.501 02:05:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:29.501 "name": "raid_bdev1", 00:35:29.501 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:29.501 "strip_size_kb": 64, 00:35:29.501 "state": "online", 00:35:29.501 "raid_level": "raid5f", 00:35:29.501 "superblock": true, 00:35:29.501 "num_base_bdevs": 3, 00:35:29.501 "num_base_bdevs_discovered": 3, 00:35:29.501 "num_base_bdevs_operational": 3, 00:35:29.501 "process": { 00:35:29.501 "type": "rebuild", 00:35:29.501 "target": "spare", 00:35:29.501 "progress": { 00:35:29.501 "blocks": 61440, 00:35:29.501 "percent": 48 00:35:29.501 } 00:35:29.501 }, 00:35:29.501 "base_bdevs_list": [ 00:35:29.501 { 00:35:29.501 "name": "spare", 00:35:29.501 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:29.501 "is_configured": true, 00:35:29.501 "data_offset": 2048, 00:35:29.501 "data_size": 63488 00:35:29.501 }, 00:35:29.501 { 00:35:29.501 "name": "BaseBdev2", 00:35:29.501 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:29.501 "is_configured": true, 00:35:29.501 "data_offset": 2048, 00:35:29.501 "data_size": 63488 00:35:29.501 }, 00:35:29.501 { 00:35:29.501 "name": "BaseBdev3", 00:35:29.501 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:29.501 "is_configured": true, 00:35:29.501 "data_offset": 2048, 00:35:29.501 "data_size": 63488 00:35:29.501 } 00:35:29.501 ] 00:35:29.501 }' 00:35:29.501 02:05:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:29.501 02:05:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:29.501 02:05:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:29.761 02:05:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:29.761 02:05:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.718 02:05:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.976 02:05:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:30.976 "name": "raid_bdev1", 00:35:30.976 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:30.976 "strip_size_kb": 64, 00:35:30.976 "state": "online", 00:35:30.976 "raid_level": "raid5f", 00:35:30.976 "superblock": true, 00:35:30.976 "num_base_bdevs": 3, 00:35:30.976 "num_base_bdevs_discovered": 3, 00:35:30.976 "num_base_bdevs_operational": 3, 00:35:30.976 "process": { 00:35:30.976 "type": "rebuild", 00:35:30.976 "target": "spare", 00:35:30.976 "progress": { 00:35:30.976 "blocks": 90112, 00:35:30.976 "percent": 70 00:35:30.976 } 00:35:30.976 }, 00:35:30.976 "base_bdevs_list": [ 00:35:30.976 { 00:35:30.976 "name": "spare", 00:35:30.976 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:30.976 "is_configured": true, 00:35:30.976 "data_offset": 2048, 00:35:30.976 "data_size": 63488 00:35:30.976 }, 00:35:30.976 { 
00:35:30.976 "name": "BaseBdev2", 00:35:30.976 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:30.976 "is_configured": true, 00:35:30.976 "data_offset": 2048, 00:35:30.976 "data_size": 63488 00:35:30.976 }, 00:35:30.976 { 00:35:30.976 "name": "BaseBdev3", 00:35:30.976 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:30.976 "is_configured": true, 00:35:30.976 "data_offset": 2048, 00:35:30.976 "data_size": 63488 00:35:30.976 } 00:35:30.976 ] 00:35:30.976 }' 00:35:30.976 02:05:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:30.976 02:05:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:30.976 02:05:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:30.976 02:05:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:30.976 02:05:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:32.349 "name": "raid_bdev1", 00:35:32.349 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:32.349 "strip_size_kb": 64, 00:35:32.349 "state": "online", 00:35:32.349 "raid_level": "raid5f", 00:35:32.349 "superblock": true, 00:35:32.349 "num_base_bdevs": 3, 00:35:32.349 "num_base_bdevs_discovered": 3, 00:35:32.349 "num_base_bdevs_operational": 3, 00:35:32.349 "process": { 00:35:32.349 "type": "rebuild", 00:35:32.349 "target": "spare", 00:35:32.349 "progress": { 00:35:32.349 "blocks": 118784, 00:35:32.349 "percent": 93 00:35:32.349 } 00:35:32.349 }, 00:35:32.349 "base_bdevs_list": [ 00:35:32.349 { 00:35:32.349 "name": "spare", 00:35:32.349 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:32.349 "is_configured": true, 00:35:32.349 "data_offset": 2048, 00:35:32.349 "data_size": 63488 00:35:32.349 }, 00:35:32.349 { 00:35:32.349 "name": "BaseBdev2", 00:35:32.349 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:32.349 "is_configured": true, 00:35:32.349 "data_offset": 2048, 00:35:32.349 "data_size": 63488 00:35:32.349 }, 00:35:32.349 { 00:35:32.349 "name": "BaseBdev3", 00:35:32.349 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:32.349 "is_configured": true, 00:35:32.349 "data_offset": 2048, 00:35:32.349 "data_size": 63488 00:35:32.349 } 00:35:32.349 ] 00:35:32.349 }' 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:32.349 02:05:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:32.607 02:05:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:35:32.607 02:05:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:32.865 [2024-04-24 02:05:32.696027] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:32.865 [2024-04-24 02:05:32.696416] 
bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:32.865 [2024-04-24 02:05:32.696753] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.431 02:05:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:33.998 "name": "raid_bdev1", 00:35:33.998 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:33.998 "strip_size_kb": 64, 00:35:33.998 "state": "online", 00:35:33.998 "raid_level": "raid5f", 00:35:33.998 "superblock": true, 00:35:33.998 "num_base_bdevs": 3, 00:35:33.998 "num_base_bdevs_discovered": 3, 00:35:33.998 "num_base_bdevs_operational": 3, 00:35:33.998 "base_bdevs_list": [ 00:35:33.998 { 00:35:33.998 "name": "spare", 00:35:33.998 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:33.998 "is_configured": true, 00:35:33.998 "data_offset": 2048, 00:35:33.998 "data_size": 63488 00:35:33.998 }, 00:35:33.998 { 00:35:33.998 "name": "BaseBdev2", 00:35:33.998 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:33.998 "is_configured": true, 00:35:33.998 "data_offset": 2048, 00:35:33.998 "data_size": 63488 00:35:33.998 }, 00:35:33.998 { 00:35:33.998 "name": "BaseBdev3", 00:35:33.998 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:33.998 "is_configured": true, 00:35:33.998 "data_offset": 2048, 00:35:33.998 "data_size": 63488 00:35:33.998 } 00:35:33.998 ] 00:35:33.998 }' 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@660 -- # break 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.998 02:05:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:34.256 02:05:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:34.256 "name": "raid_bdev1", 00:35:34.256 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:34.256 "strip_size_kb": 64, 00:35:34.256 "state": "online", 00:35:34.256 "raid_level": "raid5f", 00:35:34.256 "superblock": true, 00:35:34.256 "num_base_bdevs": 3, 00:35:34.256 "num_base_bdevs_discovered": 3, 00:35:34.256 
"num_base_bdevs_operational": 3, 00:35:34.256 "base_bdevs_list": [ 00:35:34.256 { 00:35:34.256 "name": "spare", 00:35:34.256 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:34.256 "is_configured": true, 00:35:34.256 "data_offset": 2048, 00:35:34.256 "data_size": 63488 00:35:34.256 }, 00:35:34.256 { 00:35:34.256 "name": "BaseBdev2", 00:35:34.257 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:34.257 "is_configured": true, 00:35:34.257 "data_offset": 2048, 00:35:34.257 "data_size": 63488 00:35:34.257 }, 00:35:34.257 { 00:35:34.257 "name": "BaseBdev3", 00:35:34.257 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:34.257 "is_configured": true, 00:35:34.257 "data_offset": 2048, 00:35:34.257 "data_size": 63488 00:35:34.257 } 00:35:34.257 ] 00:35:34.257 }' 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:34.257 02:05:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:34.514 02:05:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:34.514 "name": "raid_bdev1", 00:35:34.514 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:34.514 "strip_size_kb": 64, 00:35:34.514 "state": "online", 00:35:34.514 "raid_level": "raid5f", 00:35:34.514 "superblock": true, 00:35:34.514 "num_base_bdevs": 3, 00:35:34.514 "num_base_bdevs_discovered": 3, 00:35:34.514 "num_base_bdevs_operational": 3, 00:35:34.514 "base_bdevs_list": [ 00:35:34.514 { 00:35:34.514 "name": "spare", 00:35:34.514 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:34.514 "is_configured": true, 00:35:34.514 "data_offset": 2048, 00:35:34.514 "data_size": 63488 00:35:34.514 }, 00:35:34.514 { 00:35:34.514 "name": "BaseBdev2", 00:35:34.514 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:34.514 "is_configured": true, 00:35:34.514 "data_offset": 2048, 00:35:34.514 "data_size": 63488 00:35:34.514 }, 00:35:34.514 { 00:35:34.514 "name": "BaseBdev3", 00:35:34.514 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:34.514 "is_configured": true, 00:35:34.514 "data_offset": 2048, 00:35:34.514 "data_size": 63488 00:35:34.514 } 00:35:34.514 ] 00:35:34.514 }' 00:35:34.514 02:05:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:34.514 02:05:34 -- common/autotest_common.sh@10 -- # set +x 00:35:35.447 02:05:35 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:35.447 [2024-04-24 02:05:35.471373] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:35.447 [2024-04-24 02:05:35.471650] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:35.447 [2024-04-24 02:05:35.471927] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:35.447 [2024-04-24 02:05:35.472192] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:35.447 [2024-04-24 02:05:35.472333] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:35:35.447 02:05:35 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.447 02:05:35 -- bdev/bdev_raid.sh@671 -- # jq length 00:35:35.705 02:05:35 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:35:35.705 02:05:35 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:35:35.705 02:05:35 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@12 -- # local i 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:35.705 02:05:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:35.963 /dev/nbd0 00:35:35.963 02:05:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:35.963 02:05:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:35.963 02:05:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:35.963 02:05:36 -- common/autotest_common.sh@855 -- # local i 00:35:35.963 02:05:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:35.963 02:05:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:35.963 02:05:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:35.963 02:05:36 -- common/autotest_common.sh@859 -- # break 00:35:35.963 02:05:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:35.963 02:05:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:35.963 02:05:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:35.963 1+0 records in 00:35:35.963 1+0 records out 00:35:35.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450238 s, 9.1 MB/s 00:35:35.963 02:05:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:35.963 02:05:36 -- common/autotest_common.sh@872 -- # size=4096 00:35:35.963 02:05:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:35.963 02:05:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:35.963 02:05:36 -- common/autotest_common.sh@875 -- # return 0 00:35:35.963 02:05:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:35.963 02:05:36 -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:35:35.963 02:05:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:36.532 /dev/nbd1 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:36.532 02:05:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:35:36.532 02:05:36 -- common/autotest_common.sh@855 -- # local i 00:35:36.532 02:05:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:36.532 02:05:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:36.532 02:05:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:35:36.532 02:05:36 -- common/autotest_common.sh@859 -- # break 00:35:36.532 02:05:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:36.532 02:05:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:36.532 02:05:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:36.532 1+0 records in 00:35:36.532 1+0 records out 00:35:36.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051168 s, 8.0 MB/s 00:35:36.532 02:05:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:36.532 02:05:36 -- common/autotest_common.sh@872 -- # size=4096 00:35:36.532 02:05:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:36.532 02:05:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:36.532 02:05:36 -- common/autotest_common.sh@875 -- # return 0 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:36.532 02:05:36 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:36.532 02:05:36 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@51 -- # local i 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:36.532 02:05:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@41 -- # break 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@45 -- # return 0 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:37.097 02:05:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:37.353 02:05:37 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@41 -- # break 00:35:37.353 02:05:37 -- bdev/nbd_common.sh@45 -- # return 0 00:35:37.353 02:05:37 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:35:37.353 02:05:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:35:37.353 02:05:37 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:35:37.353 02:05:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:35:37.610 02:05:37 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:37.867 [2024-04-24 02:05:37.807440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:37.867 [2024-04-24 02:05:37.807859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:37.867 [2024-04-24 02:05:37.808037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:37.867 [2024-04-24 02:05:37.808187] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:37.867 [2024-04-24 02:05:37.811445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:37.867 [2024-04-24 02:05:37.811692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:37.867 [2024-04-24 02:05:37.811986] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:37.867 [2024-04-24 02:05:37.812182] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:37.867 BaseBdev1 00:35:37.867 02:05:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:35:37.867 02:05:37 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:35:37.868 02:05:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:35:38.125 02:05:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:38.384 [2024-04-24 02:05:38.275061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:38.384 [2024-04-24 02:05:38.275408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:38.384 [2024-04-24 02:05:38.275568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:38.384 [2024-04-24 02:05:38.275692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:38.384 [2024-04-24 02:05:38.276373] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:38.384 [2024-04-24 02:05:38.276567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:38.384 [2024-04-24 02:05:38.276805] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:35:38.384 [2024-04-24 02:05:38.276920] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:35:38.384 [2024-04-24 02:05:38.277016] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:35:38.384 [2024-04-24 02:05:38.277078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:35:38.384 [2024-04-24 02:05:38.277311] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:38.384 BaseBdev2 00:35:38.384 02:05:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:35:38.384 02:05:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:35:38.384 02:05:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:35:38.641 02:05:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:38.898 [2024-04-24 02:05:38.755174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:38.898 [2024-04-24 02:05:38.755550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:38.898 [2024-04-24 02:05:38.755645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:38.898 [2024-04-24 02:05:38.755839] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:38.898 [2024-04-24 02:05:38.756453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:38.898 [2024-04-24 02:05:38.756657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:38.898 [2024-04-24 02:05:38.756911] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:35:38.898 [2024-04-24 02:05:38.757060] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:38.898 BaseBdev3 00:35:38.898 02:05:38 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:39.155 02:05:39 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:39.413 [2024-04-24 02:05:39.291403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:39.413 [2024-04-24 02:05:39.291931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:39.413 [2024-04-24 02:05:39.292193] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:39.413 [2024-04-24 02:05:39.292401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:39.413 [2024-04-24 02:05:39.293312] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:39.413 [2024-04-24 02:05:39.293577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:39.413 [2024-04-24 02:05:39.293900] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:35:39.413 [2024-04-24 02:05:39.294079] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:39.413 spare 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.413 02:05:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.413 [2024-04-24 02:05:39.394374] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:35:39.413 [2024-04-24 02:05:39.394666] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:39.413 [2024-04-24 02:05:39.394907] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:35:39.413 [2024-04-24 02:05:39.403096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:35:39.413 [2024-04-24 02:05:39.403404] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:35:39.413 [2024-04-24 02:05:39.403769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:39.703 02:05:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:39.703 "name": "raid_bdev1", 00:35:39.703 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:39.703 "strip_size_kb": 64, 00:35:39.703 "state": "online", 00:35:39.703 "raid_level": "raid5f", 00:35:39.703 "superblock": true, 00:35:39.703 "num_base_bdevs": 3, 00:35:39.703 "num_base_bdevs_discovered": 3, 00:35:39.703 "num_base_bdevs_operational": 3, 00:35:39.703 "base_bdevs_list": [ 00:35:39.703 { 00:35:39.703 "name": "spare", 00:35:39.703 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:39.703 "is_configured": true, 00:35:39.703 "data_offset": 2048, 00:35:39.703 "data_size": 63488 00:35:39.703 }, 00:35:39.703 { 00:35:39.703 "name": "BaseBdev2", 00:35:39.703 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:39.703 "is_configured": true, 00:35:39.703 "data_offset": 2048, 00:35:39.703 "data_size": 63488 00:35:39.703 }, 00:35:39.703 { 00:35:39.703 "name": "BaseBdev3", 00:35:39.703 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:39.703 "is_configured": true, 00:35:39.703 "data_offset": 2048, 00:35:39.703 "data_size": 63488 00:35:39.703 } 00:35:39.703 ] 00:35:39.703 }' 00:35:39.703 02:05:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:39.703 02:05:39 -- common/autotest_common.sh@10 -- # set +x 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.267 02:05:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:35:40.524 "name": "raid_bdev1", 00:35:40.524 "uuid": "8c451b15-5332-4213-8153-00770bb88e8b", 00:35:40.524 
"strip_size_kb": 64, 00:35:40.524 "state": "online", 00:35:40.524 "raid_level": "raid5f", 00:35:40.524 "superblock": true, 00:35:40.524 "num_base_bdevs": 3, 00:35:40.524 "num_base_bdevs_discovered": 3, 00:35:40.524 "num_base_bdevs_operational": 3, 00:35:40.524 "base_bdevs_list": [ 00:35:40.524 { 00:35:40.524 "name": "spare", 00:35:40.524 "uuid": "e171034c-fdab-5716-b532-860284107c71", 00:35:40.524 "is_configured": true, 00:35:40.524 "data_offset": 2048, 00:35:40.524 "data_size": 63488 00:35:40.524 }, 00:35:40.524 { 00:35:40.524 "name": "BaseBdev2", 00:35:40.524 "uuid": "a14ce146-135c-59cf-8e74-b41f95e83d4a", 00:35:40.524 "is_configured": true, 00:35:40.524 "data_offset": 2048, 00:35:40.524 "data_size": 63488 00:35:40.524 }, 00:35:40.524 { 00:35:40.524 "name": "BaseBdev3", 00:35:40.524 "uuid": "20a46adb-d795-5baa-9a89-03e79999ae79", 00:35:40.524 "is_configured": true, 00:35:40.524 "data_offset": 2048, 00:35:40.524 "data_size": 63488 00:35:40.524 } 00:35:40.524 ] 00:35:40.524 }' 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.524 02:05:40 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:40.783 02:05:40 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:35:40.783 02:05:40 -- bdev/bdev_raid.sh@709 -- # killprocess 137932 00:35:40.783 02:05:40 -- common/autotest_common.sh@936 -- # '[' -z 137932 ']' 00:35:40.783 02:05:40 -- common/autotest_common.sh@940 -- # kill -0 137932 00:35:40.783 02:05:40 -- common/autotest_common.sh@941 -- # uname 00:35:40.783 02:05:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:40.783 02:05:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137932 00:35:40.783 02:05:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:40.783 02:05:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:40.783 02:05:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137932' 00:35:40.783 killing process with pid 137932 00:35:40.783 02:05:40 -- common/autotest_common.sh@955 -- # kill 137932 00:35:40.783 Received shutdown signal, test time was about 60.000000 seconds 00:35:40.783 00:35:40.783 Latency(us) 00:35:40.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.783 =================================================================================================================== 00:35:40.783 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:40.783 02:05:40 -- common/autotest_common.sh@960 -- # wait 137932 00:35:40.783 [2024-04-24 02:05:40.793127] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:40.783 [2024-04-24 02:05:40.793228] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:40.783 [2024-04-24 02:05:40.793319] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:40.783 [2024-04-24 02:05:40.793337] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:35:41.350 [2024-04-24 02:05:41.263379] bdev_raid.c:1381:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:35:42.743 ************************************ 00:35:42.743 END TEST raid5f_rebuild_test_sb 00:35:42.743 ************************************ 00:35:42.743 02:05:42 -- bdev/bdev_raid.sh@711 -- # return 0 00:35:42.743 00:35:42.743 real 0m27.200s 00:35:42.743 user 0m41.742s 00:35:42.743 sys 0m3.798s 00:35:42.743 02:05:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:42.743 02:05:42 -- common/autotest_common.sh@10 -- # set +x 00:35:43.015 02:05:42 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:35:43.015 02:05:42 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:35:43.015 02:05:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:35:43.015 02:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:43.015 02:05:42 -- common/autotest_common.sh@10 -- # set +x 00:35:43.015 ************************************ 00:35:43.015 START TEST raid5f_state_function_test 00:35:43.015 ************************************ 00:35:43.016 02:05:42 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 false 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=138603 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138603' 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:43.016 Process raid pid: 138603 00:35:43.016 02:05:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138603 /var/tmp/spdk-raid.sock 00:35:43.016 02:05:42 -- common/autotest_common.sh@817 -- # '[' -z 138603 ']' 00:35:43.016 02:05:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:43.016 02:05:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:43.016 02:05:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:43.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:43.016 02:05:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:43.016 02:05:42 -- common/autotest_common.sh@10 -- # set +x 00:35:43.016 [2024-04-24 02:05:42.980220] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:35:43.016 [2024-04-24 02:05:42.980642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:43.277 [2024-04-24 02:05:43.142409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.534 [2024-04-24 02:05:43.389829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.792 [2024-04-24 02:05:43.664281] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:44.050 02:05:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:44.050 02:05:43 -- common/autotest_common.sh@850 -- # return 0 00:35:44.050 02:05:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:44.309 [2024-04-24 02:05:44.230562] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:44.309 [2024-04-24 02:05:44.230859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:44.309 [2024-04-24 02:05:44.230967] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:44.309 [2024-04-24 02:05:44.231071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:44.309 [2024-04-24 02:05:44.231152] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:44.309 [2024-04-24 02:05:44.231228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:44.309 [2024-04-24 02:05:44.231329] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:44.309 [2024-04-24 02:05:44.231389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.309 02:05:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:44.567 02:05:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:44.567 "name": "Existed_Raid", 00:35:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.567 "strip_size_kb": 64, 00:35:44.567 "state": "configuring", 00:35:44.567 "raid_level": "raid5f", 00:35:44.567 "superblock": false, 00:35:44.567 "num_base_bdevs": 4, 00:35:44.567 "num_base_bdevs_discovered": 0, 00:35:44.567 "num_base_bdevs_operational": 4, 00:35:44.567 "base_bdevs_list": [ 00:35:44.567 { 00:35:44.567 "name": "BaseBdev1", 00:35:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.567 "is_configured": false, 00:35:44.567 "data_offset": 0, 00:35:44.567 "data_size": 0 00:35:44.567 }, 00:35:44.567 { 00:35:44.567 "name": "BaseBdev2", 00:35:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.567 "is_configured": false, 00:35:44.567 "data_offset": 0, 00:35:44.567 "data_size": 0 00:35:44.567 }, 00:35:44.567 { 00:35:44.567 "name": "BaseBdev3", 00:35:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.567 "is_configured": false, 00:35:44.567 "data_offset": 0, 00:35:44.567 "data_size": 0 00:35:44.567 }, 00:35:44.567 { 00:35:44.567 "name": "BaseBdev4", 00:35:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.567 "is_configured": false, 00:35:44.567 "data_offset": 0, 00:35:44.567 "data_size": 0 00:35:44.567 } 00:35:44.567 ] 00:35:44.567 }' 00:35:44.567 02:05:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:44.567 02:05:44 -- common/autotest_common.sh@10 -- # set +x 00:35:45.211 02:05:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:45.211 [2024-04-24 02:05:45.258679] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:45.211 [2024-04-24 02:05:45.258923] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:35:45.211 02:05:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:45.486 [2024-04-24 02:05:45.522754] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:45.486 [2024-04-24 02:05:45.523049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:45.486 [2024-04-24 02:05:45.523148] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:45.486 [2024-04-24 02:05:45.523213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:45.486 [2024-04-24 02:05:45.523246] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:45.486 [2024-04-24 02:05:45.523391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:45.486 [2024-04-24 02:05:45.523431] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:35:45.486 [2024-04-24 02:05:45.523480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:45.486 02:05:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:45.742 [2024-04-24 02:05:45.796006] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:45.742 BaseBdev1 00:35:45.742 02:05:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:45.742 02:05:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:35:45.742 02:05:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:45.742 02:05:45 -- common/autotest_common.sh@887 -- # local i 00:35:45.742 02:05:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:45.742 02:05:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:45.742 02:05:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:46.000 02:05:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:46.257 [ 00:35:46.257 { 00:35:46.257 "name": "BaseBdev1", 00:35:46.257 "aliases": [ 00:35:46.257 "23295829-e278-403b-ad13-531ea6aae80e" 00:35:46.257 ], 00:35:46.257 "product_name": "Malloc disk", 00:35:46.257 "block_size": 512, 00:35:46.257 "num_blocks": 65536, 00:35:46.257 "uuid": "23295829-e278-403b-ad13-531ea6aae80e", 00:35:46.257 "assigned_rate_limits": { 00:35:46.257 "rw_ios_per_sec": 0, 00:35:46.257 "rw_mbytes_per_sec": 0, 00:35:46.257 "r_mbytes_per_sec": 0, 00:35:46.257 "w_mbytes_per_sec": 0 00:35:46.257 }, 00:35:46.257 "claimed": true, 00:35:46.257 "claim_type": "exclusive_write", 00:35:46.257 "zoned": false, 00:35:46.257 "supported_io_types": { 00:35:46.257 "read": true, 00:35:46.257 "write": true, 00:35:46.257 "unmap": true, 00:35:46.257 "write_zeroes": true, 00:35:46.257 "flush": true, 00:35:46.257 "reset": true, 00:35:46.257 "compare": false, 00:35:46.257 "compare_and_write": false, 00:35:46.257 "abort": true, 00:35:46.257 "nvme_admin": false, 00:35:46.257 "nvme_io": false 00:35:46.257 }, 00:35:46.257 "memory_domains": [ 00:35:46.257 { 00:35:46.257 "dma_device_id": "system", 00:35:46.257 "dma_device_type": 1 00:35:46.257 }, 00:35:46.257 { 00:35:46.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:46.257 "dma_device_type": 2 00:35:46.257 } 00:35:46.257 ], 00:35:46.257 "driver_specific": {} 00:35:46.257 } 00:35:46.257 ] 00:35:46.257 02:05:46 -- common/autotest_common.sh@893 -- # return 0 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:35:46.257 02:05:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.515 02:05:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:46.515 "name": "Existed_Raid", 00:35:46.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.515 "strip_size_kb": 64, 00:35:46.515 "state": "configuring", 00:35:46.515 "raid_level": "raid5f", 00:35:46.515 "superblock": false, 00:35:46.515 "num_base_bdevs": 4, 00:35:46.515 "num_base_bdevs_discovered": 1, 00:35:46.515 "num_base_bdevs_operational": 4, 00:35:46.515 "base_bdevs_list": [ 00:35:46.515 { 00:35:46.515 "name": "BaseBdev1", 00:35:46.515 "uuid": "23295829-e278-403b-ad13-531ea6aae80e", 00:35:46.515 "is_configured": true, 00:35:46.515 "data_offset": 0, 00:35:46.515 "data_size": 65536 00:35:46.515 }, 00:35:46.515 { 00:35:46.515 "name": "BaseBdev2", 00:35:46.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.515 "is_configured": false, 00:35:46.515 "data_offset": 0, 00:35:46.515 "data_size": 0 00:35:46.515 }, 00:35:46.515 { 00:35:46.515 "name": "BaseBdev3", 00:35:46.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.515 "is_configured": false, 00:35:46.515 "data_offset": 0, 00:35:46.515 "data_size": 0 00:35:46.515 }, 00:35:46.515 { 00:35:46.515 "name": "BaseBdev4", 00:35:46.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.515 "is_configured": false, 00:35:46.515 "data_offset": 0, 00:35:46.515 "data_size": 0 00:35:46.515 } 00:35:46.515 ] 00:35:46.515 }' 00:35:46.515 02:05:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:46.515 02:05:46 -- common/autotest_common.sh@10 -- # set +x 00:35:47.080 02:05:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:47.336 [2024-04-24 02:05:47.276477] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:47.336 [2024-04-24 02:05:47.276790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:35:47.336 02:05:47 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:35:47.336 02:05:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:47.594 [2024-04-24 02:05:47.532587] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:47.594 [2024-04-24 02:05:47.535100] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:47.594 [2024-04-24 02:05:47.535317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:47.594 [2024-04-24 02:05:47.535418] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:47.594 [2024-04-24 02:05:47.535481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:47.594 [2024-04-24 02:05:47.535557] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:47.594 [2024-04-24 02:05:47.535657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.594 02:05:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:47.851 02:05:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:47.851 "name": "Existed_Raid", 00:35:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.851 "strip_size_kb": 64, 00:35:47.851 "state": "configuring", 00:35:47.851 "raid_level": "raid5f", 00:35:47.851 "superblock": false, 00:35:47.851 "num_base_bdevs": 4, 00:35:47.851 "num_base_bdevs_discovered": 1, 00:35:47.851 "num_base_bdevs_operational": 4, 00:35:47.851 "base_bdevs_list": [ 00:35:47.851 { 00:35:47.851 "name": "BaseBdev1", 00:35:47.851 "uuid": "23295829-e278-403b-ad13-531ea6aae80e", 00:35:47.851 "is_configured": true, 00:35:47.851 "data_offset": 0, 00:35:47.851 "data_size": 65536 00:35:47.851 }, 00:35:47.851 { 00:35:47.851 "name": "BaseBdev2", 00:35:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.851 "is_configured": false, 00:35:47.851 "data_offset": 0, 00:35:47.851 "data_size": 0 00:35:47.851 }, 00:35:47.851 { 00:35:47.851 "name": "BaseBdev3", 00:35:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.851 "is_configured": false, 00:35:47.851 "data_offset": 0, 00:35:47.851 "data_size": 0 00:35:47.851 }, 00:35:47.851 { 00:35:47.851 "name": "BaseBdev4", 00:35:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.851 "is_configured": false, 00:35:47.851 "data_offset": 0, 00:35:47.851 "data_size": 0 00:35:47.851 } 00:35:47.851 ] 00:35:47.851 }' 00:35:47.851 02:05:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:47.851 02:05:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.415 02:05:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:48.673 [2024-04-24 02:05:48.696355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:48.673 BaseBdev2 00:35:48.673 02:05:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:48.673 02:05:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:35:48.673 02:05:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:48.673 02:05:48 -- common/autotest_common.sh@887 -- # local i 00:35:48.673 02:05:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:48.673 02:05:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:48.673 02:05:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:48.930 02:05:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:49.187 [ 00:35:49.187 { 00:35:49.187 "name": "BaseBdev2", 00:35:49.187 "aliases": [ 00:35:49.187 "3788855a-8463-409b-97b1-6d442f65d80d" 00:35:49.187 ], 00:35:49.187 "product_name": "Malloc disk", 00:35:49.187 "block_size": 512, 00:35:49.187 "num_blocks": 65536, 00:35:49.187 "uuid": "3788855a-8463-409b-97b1-6d442f65d80d", 00:35:49.187 "assigned_rate_limits": { 00:35:49.187 "rw_ios_per_sec": 0, 00:35:49.187 "rw_mbytes_per_sec": 0, 00:35:49.187 "r_mbytes_per_sec": 0, 00:35:49.187 "w_mbytes_per_sec": 0 00:35:49.187 }, 00:35:49.187 "claimed": true, 00:35:49.187 "claim_type": "exclusive_write", 00:35:49.187 "zoned": false, 00:35:49.187 "supported_io_types": { 00:35:49.187 "read": true, 00:35:49.187 "write": true, 00:35:49.187 "unmap": true, 00:35:49.187 "write_zeroes": true, 00:35:49.187 "flush": true, 00:35:49.187 "reset": true, 00:35:49.187 "compare": false, 00:35:49.187 "compare_and_write": false, 00:35:49.187 "abort": true, 00:35:49.187 "nvme_admin": false, 00:35:49.187 "nvme_io": false 00:35:49.187 }, 00:35:49.187 "memory_domains": [ 00:35:49.187 { 00:35:49.187 "dma_device_id": "system", 00:35:49.187 "dma_device_type": 1 00:35:49.187 }, 00:35:49.187 { 00:35:49.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.187 "dma_device_type": 2 00:35:49.187 } 00:35:49.187 ], 00:35:49.187 "driver_specific": {} 00:35:49.187 } 00:35:49.187 ] 00:35:49.188 02:05:49 -- common/autotest_common.sh@893 -- # return 0 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.188 02:05:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:49.498 02:05:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:49.498 "name": "Existed_Raid", 00:35:49.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.498 "strip_size_kb": 64, 00:35:49.498 "state": "configuring", 00:35:49.498 "raid_level": "raid5f", 00:35:49.498 "superblock": false, 00:35:49.498 "num_base_bdevs": 4, 00:35:49.498 "num_base_bdevs_discovered": 2, 00:35:49.498 "num_base_bdevs_operational": 4, 00:35:49.498 "base_bdevs_list": [ 00:35:49.498 { 00:35:49.498 "name": "BaseBdev1", 00:35:49.498 "uuid": "23295829-e278-403b-ad13-531ea6aae80e", 00:35:49.498 "is_configured": true, 00:35:49.498 "data_offset": 0, 00:35:49.498 "data_size": 65536 00:35:49.498 }, 00:35:49.498 { 00:35:49.498 "name": "BaseBdev2", 00:35:49.498 "uuid": "3788855a-8463-409b-97b1-6d442f65d80d", 00:35:49.498 "is_configured": true, 00:35:49.498 "data_offset": 0, 00:35:49.498 
"data_size": 65536 00:35:49.498 }, 00:35:49.498 { 00:35:49.498 "name": "BaseBdev3", 00:35:49.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.498 "is_configured": false, 00:35:49.498 "data_offset": 0, 00:35:49.498 "data_size": 0 00:35:49.498 }, 00:35:49.498 { 00:35:49.498 "name": "BaseBdev4", 00:35:49.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.498 "is_configured": false, 00:35:49.498 "data_offset": 0, 00:35:49.498 "data_size": 0 00:35:49.498 } 00:35:49.498 ] 00:35:49.498 }' 00:35:49.498 02:05:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:49.498 02:05:49 -- common/autotest_common.sh@10 -- # set +x 00:35:50.117 02:05:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:50.388 [2024-04-24 02:05:50.317442] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:50.388 BaseBdev3 00:35:50.388 02:05:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:35:50.388 02:05:50 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:35:50.388 02:05:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:50.388 02:05:50 -- common/autotest_common.sh@887 -- # local i 00:35:50.388 02:05:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:50.388 02:05:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:50.388 02:05:50 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:50.645 02:05:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:51.209 [ 00:35:51.209 { 00:35:51.209 "name": "BaseBdev3", 00:35:51.209 "aliases": [ 00:35:51.209 "4b045166-3512-4ede-b43f-4a086a027eed" 00:35:51.209 ], 00:35:51.209 "product_name": "Malloc disk", 00:35:51.209 "block_size": 512, 00:35:51.209 "num_blocks": 65536, 00:35:51.209 "uuid": "4b045166-3512-4ede-b43f-4a086a027eed", 00:35:51.209 "assigned_rate_limits": { 00:35:51.209 "rw_ios_per_sec": 0, 00:35:51.209 "rw_mbytes_per_sec": 0, 00:35:51.209 "r_mbytes_per_sec": 0, 00:35:51.209 "w_mbytes_per_sec": 0 00:35:51.209 }, 00:35:51.209 "claimed": true, 00:35:51.209 "claim_type": "exclusive_write", 00:35:51.209 "zoned": false, 00:35:51.209 "supported_io_types": { 00:35:51.209 "read": true, 00:35:51.209 "write": true, 00:35:51.209 "unmap": true, 00:35:51.209 "write_zeroes": true, 00:35:51.209 "flush": true, 00:35:51.209 "reset": true, 00:35:51.209 "compare": false, 00:35:51.209 "compare_and_write": false, 00:35:51.209 "abort": true, 00:35:51.209 "nvme_admin": false, 00:35:51.209 "nvme_io": false 00:35:51.209 }, 00:35:51.209 "memory_domains": [ 00:35:51.209 { 00:35:51.209 "dma_device_id": "system", 00:35:51.209 "dma_device_type": 1 00:35:51.209 }, 00:35:51.209 { 00:35:51.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:51.209 "dma_device_type": 2 00:35:51.209 } 00:35:51.209 ], 00:35:51.209 "driver_specific": {} 00:35:51.209 } 00:35:51.209 ] 00:35:51.209 02:05:51 -- common/autotest_common.sh@893 -- # return 0 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.209 02:05:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:51.466 02:05:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:51.466 "name": "Existed_Raid", 00:35:51.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.466 "strip_size_kb": 64, 00:35:51.466 "state": "configuring", 00:35:51.466 "raid_level": "raid5f", 00:35:51.466 "superblock": false, 00:35:51.466 "num_base_bdevs": 4, 00:35:51.466 "num_base_bdevs_discovered": 3, 00:35:51.466 "num_base_bdevs_operational": 4, 00:35:51.466 "base_bdevs_list": [ 00:35:51.466 { 00:35:51.466 "name": "BaseBdev1", 00:35:51.466 "uuid": "23295829-e278-403b-ad13-531ea6aae80e", 00:35:51.466 "is_configured": true, 00:35:51.466 "data_offset": 0, 00:35:51.466 "data_size": 65536 00:35:51.466 }, 00:35:51.466 { 00:35:51.466 "name": "BaseBdev2", 00:35:51.466 "uuid": "3788855a-8463-409b-97b1-6d442f65d80d", 00:35:51.466 "is_configured": true, 00:35:51.466 "data_offset": 0, 00:35:51.466 "data_size": 65536 00:35:51.466 }, 00:35:51.466 { 00:35:51.466 "name": "BaseBdev3", 00:35:51.466 "uuid": "4b045166-3512-4ede-b43f-4a086a027eed", 00:35:51.466 "is_configured": true, 00:35:51.466 "data_offset": 0, 00:35:51.466 "data_size": 65536 00:35:51.466 }, 00:35:51.466 { 00:35:51.466 "name": "BaseBdev4", 00:35:51.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.466 "is_configured": false, 00:35:51.466 "data_offset": 0, 00:35:51.466 "data_size": 0 00:35:51.466 } 00:35:51.466 ] 00:35:51.466 }' 00:35:51.466 02:05:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:51.466 02:05:51 -- common/autotest_common.sh@10 -- # set +x 00:35:52.031 02:05:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:35:52.289 [2024-04-24 02:05:52.296019] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:52.289 [2024-04-24 02:05:52.296487] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:35:52.289 [2024-04-24 02:05:52.296669] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:52.289 [2024-04-24 02:05:52.296981] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:35:52.289 [2024-04-24 02:05:52.308872] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:35:52.289 [2024-04-24 02:05:52.309193] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:35:52.289 [2024-04-24 02:05:52.309826] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:52.289 BaseBdev4 00:35:52.289 02:05:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:35:52.289 02:05:52 -- common/autotest_common.sh@885 -- # 
local bdev_name=BaseBdev4 00:35:52.289 02:05:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:52.289 02:05:52 -- common/autotest_common.sh@887 -- # local i 00:35:52.289 02:05:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:52.289 02:05:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:52.289 02:05:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:52.853 02:05:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:53.111 [ 00:35:53.111 { 00:35:53.111 "name": "BaseBdev4", 00:35:53.111 "aliases": [ 00:35:53.111 "9d7a535a-d409-4aa3-a7fc-1340557fb68f" 00:35:53.111 ], 00:35:53.111 "product_name": "Malloc disk", 00:35:53.111 "block_size": 512, 00:35:53.111 "num_blocks": 65536, 00:35:53.111 "uuid": "9d7a535a-d409-4aa3-a7fc-1340557fb68f", 00:35:53.111 "assigned_rate_limits": { 00:35:53.111 "rw_ios_per_sec": 0, 00:35:53.111 "rw_mbytes_per_sec": 0, 00:35:53.111 "r_mbytes_per_sec": 0, 00:35:53.111 "w_mbytes_per_sec": 0 00:35:53.111 }, 00:35:53.111 "claimed": true, 00:35:53.111 "claim_type": "exclusive_write", 00:35:53.111 "zoned": false, 00:35:53.111 "supported_io_types": { 00:35:53.111 "read": true, 00:35:53.111 "write": true, 00:35:53.111 "unmap": true, 00:35:53.111 "write_zeroes": true, 00:35:53.111 "flush": true, 00:35:53.111 "reset": true, 00:35:53.111 "compare": false, 00:35:53.111 "compare_and_write": false, 00:35:53.111 "abort": true, 00:35:53.111 "nvme_admin": false, 00:35:53.111 "nvme_io": false 00:35:53.111 }, 00:35:53.111 "memory_domains": [ 00:35:53.111 { 00:35:53.111 "dma_device_id": "system", 00:35:53.111 "dma_device_type": 1 00:35:53.111 }, 00:35:53.111 { 00:35:53.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.111 "dma_device_type": 2 00:35:53.111 } 00:35:53.111 ], 00:35:53.111 "driver_specific": {} 00:35:53.111 } 00:35:53.111 ] 00:35:53.111 02:05:53 -- common/autotest_common.sh@893 -- # return 0 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.111 02:05:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:53.368 02:05:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:53.368 "name": "Existed_Raid", 00:35:53.368 "uuid": "c7dcadce-d47d-4765-a607-2f6f8ec88188", 00:35:53.368 "strip_size_kb": 64, 00:35:53.368 "state": "online", 00:35:53.368 "raid_level": "raid5f", 00:35:53.368 "superblock": 
false, 00:35:53.368 "num_base_bdevs": 4, 00:35:53.368 "num_base_bdevs_discovered": 4, 00:35:53.368 "num_base_bdevs_operational": 4, 00:35:53.368 "base_bdevs_list": [ 00:35:53.368 { 00:35:53.368 "name": "BaseBdev1", 00:35:53.368 "uuid": "23295829-e278-403b-ad13-531ea6aae80e", 00:35:53.368 "is_configured": true, 00:35:53.368 "data_offset": 0, 00:35:53.368 "data_size": 65536 00:35:53.368 }, 00:35:53.368 { 00:35:53.368 "name": "BaseBdev2", 00:35:53.368 "uuid": "3788855a-8463-409b-97b1-6d442f65d80d", 00:35:53.368 "is_configured": true, 00:35:53.368 "data_offset": 0, 00:35:53.368 "data_size": 65536 00:35:53.368 }, 00:35:53.368 { 00:35:53.368 "name": "BaseBdev3", 00:35:53.368 "uuid": "4b045166-3512-4ede-b43f-4a086a027eed", 00:35:53.368 "is_configured": true, 00:35:53.368 "data_offset": 0, 00:35:53.368 "data_size": 65536 00:35:53.368 }, 00:35:53.368 { 00:35:53.368 "name": "BaseBdev4", 00:35:53.369 "uuid": "9d7a535a-d409-4aa3-a7fc-1340557fb68f", 00:35:53.369 "is_configured": true, 00:35:53.369 "data_offset": 0, 00:35:53.369 "data_size": 65536 00:35:53.369 } 00:35:53.369 ] 00:35:53.369 }' 00:35:53.369 02:05:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:53.369 02:05:53 -- common/autotest_common.sh@10 -- # set +x 00:35:53.933 02:05:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:54.190 [2024-04-24 02:05:54.250292] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.448 02:05:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:54.704 02:05:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:54.704 "name": "Existed_Raid", 00:35:54.704 "uuid": "c7dcadce-d47d-4765-a607-2f6f8ec88188", 00:35:54.704 "strip_size_kb": 64, 00:35:54.704 "state": "online", 00:35:54.704 "raid_level": "raid5f", 00:35:54.705 "superblock": false, 00:35:54.705 "num_base_bdevs": 4, 00:35:54.705 "num_base_bdevs_discovered": 3, 00:35:54.705 "num_base_bdevs_operational": 3, 00:35:54.705 "base_bdevs_list": [ 00:35:54.705 { 00:35:54.705 "name": null, 00:35:54.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:54.705 "is_configured": false, 00:35:54.705 "data_offset": 0, 00:35:54.705 "data_size": 65536 
00:35:54.705 }, 00:35:54.705 { 00:35:54.705 "name": "BaseBdev2", 00:35:54.705 "uuid": "3788855a-8463-409b-97b1-6d442f65d80d", 00:35:54.705 "is_configured": true, 00:35:54.705 "data_offset": 0, 00:35:54.705 "data_size": 65536 00:35:54.705 }, 00:35:54.705 { 00:35:54.705 "name": "BaseBdev3", 00:35:54.705 "uuid": "4b045166-3512-4ede-b43f-4a086a027eed", 00:35:54.705 "is_configured": true, 00:35:54.705 "data_offset": 0, 00:35:54.705 "data_size": 65536 00:35:54.705 }, 00:35:54.705 { 00:35:54.705 "name": "BaseBdev4", 00:35:54.705 "uuid": "9d7a535a-d409-4aa3-a7fc-1340557fb68f", 00:35:54.705 "is_configured": true, 00:35:54.705 "data_offset": 0, 00:35:54.705 "data_size": 65536 00:35:54.705 } 00:35:54.705 ] 00:35:54.705 }' 00:35:54.705 02:05:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:54.705 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:55.637 02:05:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:55.895 [2024-04-24 02:05:55.922651] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:55.895 [2024-04-24 02:05:55.922965] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:56.152 [2024-04-24 02:05:56.037388] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:56.152 02:05:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:56.152 02:05:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:56.152 02:05:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.152 02:05:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:56.437 02:05:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:56.437 02:05:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:56.437 02:05:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:35:56.694 [2024-04-24 02:05:56.584585] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:56.694 02:05:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:56.694 02:05:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:56.694 02:05:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.694 02:05:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:57.259 02:05:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:57.259 02:05:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:57.259 02:05:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:35:57.259 [2024-04-24 02:05:57.260313] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:57.259 [2024-04-24 02:05:57.260681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:35:57.516 02:05:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:57.516 02:05:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:57.516 02:05:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.516 02:05:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:57.773 02:05:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:57.773 02:05:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:57.773 02:05:57 -- bdev/bdev_raid.sh@287 -- # killprocess 138603 00:35:57.773 02:05:57 -- common/autotest_common.sh@936 -- # '[' -z 138603 ']' 00:35:57.773 02:05:57 -- common/autotest_common.sh@940 -- # kill -0 138603 00:35:57.773 02:05:57 -- common/autotest_common.sh@941 -- # uname 00:35:57.773 02:05:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:57.773 02:05:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138603 00:35:57.773 killing process with pid 138603 00:35:57.773 02:05:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:57.773 02:05:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:57.773 02:05:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138603' 00:35:57.773 02:05:57 -- common/autotest_common.sh@955 -- # kill 138603 00:35:57.773 02:05:57 -- common/autotest_common.sh@960 -- # wait 138603 00:35:57.773 [2024-04-24 02:05:57.696581] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:57.773 [2024-04-24 02:05:57.697004] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:59.670 ************************************ 00:35:59.670 END TEST raid5f_state_function_test 00:35:59.670 ************************************ 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:59.670 00:35:59.670 real 0m16.327s 00:35:59.670 user 0m28.348s 00:35:59.670 sys 0m2.072s 00:35:59.670 02:05:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:59.670 02:05:59 -- common/autotest_common.sh@10 -- # set +x 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:35:59.670 02:05:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:35:59.670 02:05:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:59.670 02:05:59 -- common/autotest_common.sh@10 -- # set +x 00:35:59.670 ************************************ 00:35:59.670 START TEST raid5f_state_function_test_sb 00:35:59.670 ************************************ 00:35:59.670 02:05:59 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 true 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:59.670 02:05:59 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=139058 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139058' 00:35:59.670 Process raid pid: 139058 00:35:59.670 02:05:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139058 /var/tmp/spdk-raid.sock 00:35:59.670 02:05:59 -- common/autotest_common.sh@817 -- # '[' -z 139058 ']' 00:35:59.670 02:05:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:59.670 02:05:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:59.670 02:05:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:59.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:59.670 02:05:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:59.670 02:05:59 -- common/autotest_common.sh@10 -- # set +x 00:35:59.670 [2024-04-24 02:05:59.428906] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:35:59.670 [2024-04-24 02:05:59.429246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.670 [2024-04-24 02:05:59.623297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.927 [2024-04-24 02:05:59.952642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.185 [2024-04-24 02:06:00.251090] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:00.442 02:06:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:00.442 02:06:00 -- common/autotest_common.sh@850 -- # return 0 00:36:00.442 02:06:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:00.700 [2024-04-24 02:06:00.699571] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:00.700 [2024-04-24 02:06:00.699864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:00.700 [2024-04-24 02:06:00.699977] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:00.700 [2024-04-24 02:06:00.700042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:00.700 [2024-04-24 02:06:00.700244] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:00.700 [2024-04-24 02:06:00.700445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:00.700 [2024-04-24 02:06:00.700531] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:00.700 [2024-04-24 02:06:00.700603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.700 02:06:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.957 02:06:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:00.958 "name": "Existed_Raid", 00:36:00.958 "uuid": "2c17f87c-b954-46eb-b182-363eb0e5b1c2", 00:36:00.958 "strip_size_kb": 64, 00:36:00.958 "state": "configuring", 00:36:00.958 "raid_level": "raid5f", 00:36:00.958 "superblock": true, 00:36:00.958 "num_base_bdevs": 4, 00:36:00.958 "num_base_bdevs_discovered": 0, 00:36:00.958 "num_base_bdevs_operational": 4, 00:36:00.958 "base_bdevs_list": [ 00:36:00.958 { 
00:36:00.958 "name": "BaseBdev1", 00:36:00.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.958 "is_configured": false, 00:36:00.958 "data_offset": 0, 00:36:00.958 "data_size": 0 00:36:00.958 }, 00:36:00.958 { 00:36:00.958 "name": "BaseBdev2", 00:36:00.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.958 "is_configured": false, 00:36:00.958 "data_offset": 0, 00:36:00.958 "data_size": 0 00:36:00.958 }, 00:36:00.958 { 00:36:00.958 "name": "BaseBdev3", 00:36:00.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.958 "is_configured": false, 00:36:00.958 "data_offset": 0, 00:36:00.958 "data_size": 0 00:36:00.958 }, 00:36:00.958 { 00:36:00.958 "name": "BaseBdev4", 00:36:00.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.958 "is_configured": false, 00:36:00.958 "data_offset": 0, 00:36:00.958 "data_size": 0 00:36:00.958 } 00:36:00.958 ] 00:36:00.958 }' 00:36:00.958 02:06:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:00.958 02:06:00 -- common/autotest_common.sh@10 -- # set +x 00:36:01.908 02:06:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:01.908 [2024-04-24 02:06:01.887654] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:01.908 [2024-04-24 02:06:01.887911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:36:01.908 02:06:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:02.280 [2024-04-24 02:06:02.155765] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:02.280 [2024-04-24 02:06:02.157223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:02.280 [2024-04-24 02:06:02.157354] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:02.280 [2024-04-24 02:06:02.157421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:02.280 [2024-04-24 02:06:02.157499] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:02.280 [2024-04-24 02:06:02.157629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:02.280 [2024-04-24 02:06:02.157708] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:02.280 [2024-04-24 02:06:02.157767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:02.281 02:06:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:02.539 [2024-04-24 02:06:02.462924] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:02.539 BaseBdev1 00:36:02.539 02:06:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:36:02.539 02:06:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:36:02.539 02:06:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:02.539 02:06:02 -- common/autotest_common.sh@887 -- # local i 00:36:02.539 02:06:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:02.539 02:06:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:02.539 02:06:02 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:02.797 02:06:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:03.054 [ 00:36:03.054 { 00:36:03.054 "name": "BaseBdev1", 00:36:03.054 "aliases": [ 00:36:03.054 "56b02e95-db85-4e5d-ac7b-c7afc8d6dc7c" 00:36:03.054 ], 00:36:03.054 "product_name": "Malloc disk", 00:36:03.054 "block_size": 512, 00:36:03.054 "num_blocks": 65536, 00:36:03.054 "uuid": "56b02e95-db85-4e5d-ac7b-c7afc8d6dc7c", 00:36:03.054 "assigned_rate_limits": { 00:36:03.054 "rw_ios_per_sec": 0, 00:36:03.054 "rw_mbytes_per_sec": 0, 00:36:03.054 "r_mbytes_per_sec": 0, 00:36:03.054 "w_mbytes_per_sec": 0 00:36:03.054 }, 00:36:03.054 "claimed": true, 00:36:03.054 "claim_type": "exclusive_write", 00:36:03.054 "zoned": false, 00:36:03.054 "supported_io_types": { 00:36:03.054 "read": true, 00:36:03.054 "write": true, 00:36:03.054 "unmap": true, 00:36:03.054 "write_zeroes": true, 00:36:03.054 "flush": true, 00:36:03.054 "reset": true, 00:36:03.054 "compare": false, 00:36:03.054 "compare_and_write": false, 00:36:03.054 "abort": true, 00:36:03.054 "nvme_admin": false, 00:36:03.054 "nvme_io": false 00:36:03.054 }, 00:36:03.054 "memory_domains": [ 00:36:03.054 { 00:36:03.054 "dma_device_id": "system", 00:36:03.054 "dma_device_type": 1 00:36:03.054 }, 00:36:03.054 { 00:36:03.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:03.054 "dma_device_type": 2 00:36:03.054 } 00:36:03.054 ], 00:36:03.054 "driver_specific": {} 00:36:03.054 } 00:36:03.054 ] 00:36:03.054 02:06:03 -- common/autotest_common.sh@893 -- # return 0 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:03.054 02:06:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:03.055 02:06:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.312 02:06:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:03.570 02:06:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:03.570 "name": "Existed_Raid", 00:36:03.570 "uuid": "9ce7f0d2-5097-4e8e-9a41-a25fc8e0dcf7", 00:36:03.570 "strip_size_kb": 64, 00:36:03.570 "state": "configuring", 00:36:03.570 "raid_level": "raid5f", 00:36:03.570 "superblock": true, 00:36:03.570 "num_base_bdevs": 4, 00:36:03.570 "num_base_bdevs_discovered": 1, 00:36:03.570 "num_base_bdevs_operational": 4, 00:36:03.570 "base_bdevs_list": [ 00:36:03.570 { 00:36:03.570 "name": "BaseBdev1", 00:36:03.570 "uuid": "56b02e95-db85-4e5d-ac7b-c7afc8d6dc7c", 00:36:03.570 "is_configured": true, 00:36:03.570 "data_offset": 2048, 00:36:03.570 "data_size": 63488 00:36:03.570 }, 00:36:03.570 { 00:36:03.570 "name": "BaseBdev2", 00:36:03.570 "uuid": "00000000-0000-0000-0000-000000000000", 
00:36:03.570 "is_configured": false, 00:36:03.570 "data_offset": 0, 00:36:03.570 "data_size": 0 00:36:03.570 }, 00:36:03.570 { 00:36:03.570 "name": "BaseBdev3", 00:36:03.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.570 "is_configured": false, 00:36:03.570 "data_offset": 0, 00:36:03.570 "data_size": 0 00:36:03.570 }, 00:36:03.570 { 00:36:03.570 "name": "BaseBdev4", 00:36:03.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.570 "is_configured": false, 00:36:03.570 "data_offset": 0, 00:36:03.570 "data_size": 0 00:36:03.570 } 00:36:03.570 ] 00:36:03.570 }' 00:36:03.570 02:06:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:03.570 02:06:03 -- common/autotest_common.sh@10 -- # set +x 00:36:04.134 02:06:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:04.392 [2024-04-24 02:06:04.299410] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:04.392 [2024-04-24 02:06:04.299676] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:36:04.392 02:06:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:36:04.392 02:06:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:04.649 02:06:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:04.907 BaseBdev1 00:36:04.907 02:06:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:36:04.907 02:06:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:36:04.907 02:06:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:04.907 02:06:04 -- common/autotest_common.sh@887 -- # local i 00:36:04.907 02:06:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:04.907 02:06:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:04.907 02:06:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:05.164 02:06:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:05.421 [ 00:36:05.421 { 00:36:05.421 "name": "BaseBdev1", 00:36:05.421 "aliases": [ 00:36:05.421 "e4c04ad5-c856-445e-a278-7a6918386ccc" 00:36:05.421 ], 00:36:05.421 "product_name": "Malloc disk", 00:36:05.421 "block_size": 512, 00:36:05.421 "num_blocks": 65536, 00:36:05.421 "uuid": "e4c04ad5-c856-445e-a278-7a6918386ccc", 00:36:05.421 "assigned_rate_limits": { 00:36:05.421 "rw_ios_per_sec": 0, 00:36:05.421 "rw_mbytes_per_sec": 0, 00:36:05.421 "r_mbytes_per_sec": 0, 00:36:05.421 "w_mbytes_per_sec": 0 00:36:05.421 }, 00:36:05.421 "claimed": false, 00:36:05.421 "zoned": false, 00:36:05.421 "supported_io_types": { 00:36:05.421 "read": true, 00:36:05.421 "write": true, 00:36:05.421 "unmap": true, 00:36:05.421 "write_zeroes": true, 00:36:05.421 "flush": true, 00:36:05.421 "reset": true, 00:36:05.421 "compare": false, 00:36:05.421 "compare_and_write": false, 00:36:05.421 "abort": true, 00:36:05.421 "nvme_admin": false, 00:36:05.421 "nvme_io": false 00:36:05.421 }, 00:36:05.421 "memory_domains": [ 00:36:05.421 { 00:36:05.421 "dma_device_id": "system", 00:36:05.421 "dma_device_type": 1 00:36:05.421 }, 00:36:05.421 { 00:36:05.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:05.421 "dma_device_type": 2 
00:36:05.421 } 00:36:05.421 ], 00:36:05.421 "driver_specific": {} 00:36:05.421 } 00:36:05.421 ] 00:36:05.421 02:06:05 -- common/autotest_common.sh@893 -- # return 0 00:36:05.421 02:06:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:05.682 [2024-04-24 02:06:05.739698] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:05.682 [2024-04-24 02:06:05.742210] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:05.682 [2024-04-24 02:06:05.742445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:05.682 [2024-04-24 02:06:05.742552] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:05.682 [2024-04-24 02:06:05.742617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:05.682 [2024-04-24 02:06:05.742811] bdev.c:8073:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:05.682 [2024-04-24 02:06:05.742869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:05.682 02:06:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:05.939 02:06:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.939 02:06:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.939 02:06:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:05.939 "name": "Existed_Raid", 00:36:05.939 "uuid": "e728f5eb-d996-42ee-b267-cc34986a85be", 00:36:05.939 "strip_size_kb": 64, 00:36:05.939 "state": "configuring", 00:36:05.939 "raid_level": "raid5f", 00:36:05.939 "superblock": true, 00:36:05.939 "num_base_bdevs": 4, 00:36:05.939 "num_base_bdevs_discovered": 1, 00:36:05.939 "num_base_bdevs_operational": 4, 00:36:05.939 "base_bdevs_list": [ 00:36:05.939 { 00:36:05.939 "name": "BaseBdev1", 00:36:05.939 "uuid": "e4c04ad5-c856-445e-a278-7a6918386ccc", 00:36:05.939 "is_configured": true, 00:36:05.939 "data_offset": 2048, 00:36:05.939 "data_size": 63488 00:36:05.939 }, 00:36:05.939 { 00:36:05.939 "name": "BaseBdev2", 00:36:05.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.939 "is_configured": false, 00:36:05.939 "data_offset": 0, 00:36:05.939 "data_size": 0 00:36:05.939 }, 00:36:05.939 { 00:36:05.939 "name": "BaseBdev3", 00:36:05.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.939 "is_configured": 
false, 00:36:05.939 "data_offset": 0, 00:36:05.939 "data_size": 0 00:36:05.940 }, 00:36:05.940 { 00:36:05.940 "name": "BaseBdev4", 00:36:05.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.940 "is_configured": false, 00:36:05.940 "data_offset": 0, 00:36:05.940 "data_size": 0 00:36:05.940 } 00:36:05.940 ] 00:36:05.940 }' 00:36:05.940 02:06:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:05.940 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:36:06.503 02:06:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:07.068 [2024-04-24 02:06:06.920448] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:07.068 BaseBdev2 00:36:07.068 02:06:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:36:07.068 02:06:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:36:07.068 02:06:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:07.068 02:06:06 -- common/autotest_common.sh@887 -- # local i 00:36:07.068 02:06:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:07.068 02:06:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:07.068 02:06:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:07.325 02:06:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:07.583 [ 00:36:07.583 { 00:36:07.583 "name": "BaseBdev2", 00:36:07.583 "aliases": [ 00:36:07.583 "4d2685b4-2c2f-40e5-bea8-c1f901cd03ef" 00:36:07.583 ], 00:36:07.583 "product_name": "Malloc disk", 00:36:07.583 "block_size": 512, 00:36:07.583 "num_blocks": 65536, 00:36:07.583 "uuid": "4d2685b4-2c2f-40e5-bea8-c1f901cd03ef", 00:36:07.583 "assigned_rate_limits": { 00:36:07.583 "rw_ios_per_sec": 0, 00:36:07.583 "rw_mbytes_per_sec": 0, 00:36:07.583 "r_mbytes_per_sec": 0, 00:36:07.583 "w_mbytes_per_sec": 0 00:36:07.583 }, 00:36:07.583 "claimed": true, 00:36:07.583 "claim_type": "exclusive_write", 00:36:07.583 "zoned": false, 00:36:07.583 "supported_io_types": { 00:36:07.583 "read": true, 00:36:07.583 "write": true, 00:36:07.583 "unmap": true, 00:36:07.583 "write_zeroes": true, 00:36:07.583 "flush": true, 00:36:07.583 "reset": true, 00:36:07.583 "compare": false, 00:36:07.583 "compare_and_write": false, 00:36:07.583 "abort": true, 00:36:07.583 "nvme_admin": false, 00:36:07.583 "nvme_io": false 00:36:07.583 }, 00:36:07.583 "memory_domains": [ 00:36:07.583 { 00:36:07.583 "dma_device_id": "system", 00:36:07.583 "dma_device_type": 1 00:36:07.583 }, 00:36:07.583 { 00:36:07.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:07.583 "dma_device_type": 2 00:36:07.583 } 00:36:07.583 ], 00:36:07.583 "driver_specific": {} 00:36:07.583 } 00:36:07.583 ] 00:36:07.583 02:06:07 -- common/autotest_common.sh@893 -- # return 0 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:07.583 02:06:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:07.841 02:06:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:07.841 "name": "Existed_Raid", 00:36:07.841 "uuid": "e728f5eb-d996-42ee-b267-cc34986a85be", 00:36:07.841 "strip_size_kb": 64, 00:36:07.841 "state": "configuring", 00:36:07.841 "raid_level": "raid5f", 00:36:07.841 "superblock": true, 00:36:07.841 "num_base_bdevs": 4, 00:36:07.841 "num_base_bdevs_discovered": 2, 00:36:07.841 "num_base_bdevs_operational": 4, 00:36:07.841 "base_bdevs_list": [ 00:36:07.841 { 00:36:07.841 "name": "BaseBdev1", 00:36:07.841 "uuid": "e4c04ad5-c856-445e-a278-7a6918386ccc", 00:36:07.841 "is_configured": true, 00:36:07.841 "data_offset": 2048, 00:36:07.841 "data_size": 63488 00:36:07.841 }, 00:36:07.841 { 00:36:07.841 "name": "BaseBdev2", 00:36:07.841 "uuid": "4d2685b4-2c2f-40e5-bea8-c1f901cd03ef", 00:36:07.841 "is_configured": true, 00:36:07.841 "data_offset": 2048, 00:36:07.841 "data_size": 63488 00:36:07.841 }, 00:36:07.841 { 00:36:07.841 "name": "BaseBdev3", 00:36:07.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.841 "is_configured": false, 00:36:07.841 "data_offset": 0, 00:36:07.841 "data_size": 0 00:36:07.841 }, 00:36:07.841 { 00:36:07.841 "name": "BaseBdev4", 00:36:07.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.841 "is_configured": false, 00:36:07.841 "data_offset": 0, 00:36:07.841 "data_size": 0 00:36:07.841 } 00:36:07.841 ] 00:36:07.841 }' 00:36:07.841 02:06:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:07.841 02:06:07 -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 02:06:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:08.663 [2024-04-24 02:06:08.680880] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:08.663 BaseBdev3 00:36:08.663 02:06:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:36:08.663 02:06:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:36:08.663 02:06:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:08.663 02:06:08 -- common/autotest_common.sh@887 -- # local i 00:36:08.663 02:06:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:08.663 02:06:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:08.663 02:06:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:08.920 02:06:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:09.177 [ 00:36:09.177 { 00:36:09.177 "name": "BaseBdev3", 00:36:09.177 "aliases": [ 00:36:09.177 "5415102f-2b60-4f5d-896d-9f7b8fdf137d" 00:36:09.177 ], 00:36:09.177 "product_name": "Malloc disk", 00:36:09.177 "block_size": 512, 00:36:09.177 "num_blocks": 65536, 00:36:09.177 "uuid": 
"5415102f-2b60-4f5d-896d-9f7b8fdf137d", 00:36:09.177 "assigned_rate_limits": { 00:36:09.177 "rw_ios_per_sec": 0, 00:36:09.177 "rw_mbytes_per_sec": 0, 00:36:09.177 "r_mbytes_per_sec": 0, 00:36:09.177 "w_mbytes_per_sec": 0 00:36:09.177 }, 00:36:09.177 "claimed": true, 00:36:09.177 "claim_type": "exclusive_write", 00:36:09.177 "zoned": false, 00:36:09.177 "supported_io_types": { 00:36:09.177 "read": true, 00:36:09.177 "write": true, 00:36:09.177 "unmap": true, 00:36:09.177 "write_zeroes": true, 00:36:09.177 "flush": true, 00:36:09.177 "reset": true, 00:36:09.177 "compare": false, 00:36:09.177 "compare_and_write": false, 00:36:09.177 "abort": true, 00:36:09.177 "nvme_admin": false, 00:36:09.177 "nvme_io": false 00:36:09.177 }, 00:36:09.177 "memory_domains": [ 00:36:09.177 { 00:36:09.177 "dma_device_id": "system", 00:36:09.177 "dma_device_type": 1 00:36:09.177 }, 00:36:09.177 { 00:36:09.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:09.177 "dma_device_type": 2 00:36:09.177 } 00:36:09.177 ], 00:36:09.177 "driver_specific": {} 00:36:09.177 } 00:36:09.177 ] 00:36:09.177 02:06:09 -- common/autotest_common.sh@893 -- # return 0 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.177 02:06:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:09.433 02:06:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:09.433 "name": "Existed_Raid", 00:36:09.433 "uuid": "e728f5eb-d996-42ee-b267-cc34986a85be", 00:36:09.433 "strip_size_kb": 64, 00:36:09.433 "state": "configuring", 00:36:09.433 "raid_level": "raid5f", 00:36:09.433 "superblock": true, 00:36:09.433 "num_base_bdevs": 4, 00:36:09.433 "num_base_bdevs_discovered": 3, 00:36:09.433 "num_base_bdevs_operational": 4, 00:36:09.433 "base_bdevs_list": [ 00:36:09.433 { 00:36:09.433 "name": "BaseBdev1", 00:36:09.433 "uuid": "e4c04ad5-c856-445e-a278-7a6918386ccc", 00:36:09.433 "is_configured": true, 00:36:09.433 "data_offset": 2048, 00:36:09.433 "data_size": 63488 00:36:09.433 }, 00:36:09.433 { 00:36:09.433 "name": "BaseBdev2", 00:36:09.433 "uuid": "4d2685b4-2c2f-40e5-bea8-c1f901cd03ef", 00:36:09.433 "is_configured": true, 00:36:09.433 "data_offset": 2048, 00:36:09.433 "data_size": 63488 00:36:09.433 }, 00:36:09.433 { 00:36:09.433 "name": "BaseBdev3", 00:36:09.434 "uuid": "5415102f-2b60-4f5d-896d-9f7b8fdf137d", 00:36:09.434 "is_configured": true, 00:36:09.434 "data_offset": 2048, 00:36:09.434 "data_size": 63488 00:36:09.434 }, 00:36:09.434 { 00:36:09.434 "name": "BaseBdev4", 00:36:09.434 
"uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.434 "is_configured": false, 00:36:09.434 "data_offset": 0, 00:36:09.434 "data_size": 0 00:36:09.434 } 00:36:09.434 ] 00:36:09.434 }' 00:36:09.434 02:06:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:09.434 02:06:09 -- common/autotest_common.sh@10 -- # set +x 00:36:09.997 02:06:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:10.255 [2024-04-24 02:06:10.201220] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:10.255 [2024-04-24 02:06:10.201663] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:36:10.255 [2024-04-24 02:06:10.201821] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:10.255 [2024-04-24 02:06:10.202013] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:36:10.255 BaseBdev4 00:36:10.255 [2024-04-24 02:06:10.212292] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:36:10.255 [2024-04-24 02:06:10.212447] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:36:10.255 [2024-04-24 02:06:10.212808] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:10.255 02:06:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:36:10.255 02:06:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:36:10.255 02:06:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:10.255 02:06:10 -- common/autotest_common.sh@887 -- # local i 00:36:10.255 02:06:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:10.255 02:06:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:10.255 02:06:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:10.511 02:06:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:10.773 [ 00:36:10.773 { 00:36:10.773 "name": "BaseBdev4", 00:36:10.773 "aliases": [ 00:36:10.773 "36a5fab0-e3b1-406b-b340-0b52a9c25c4f" 00:36:10.773 ], 00:36:10.773 "product_name": "Malloc disk", 00:36:10.773 "block_size": 512, 00:36:10.773 "num_blocks": 65536, 00:36:10.773 "uuid": "36a5fab0-e3b1-406b-b340-0b52a9c25c4f", 00:36:10.773 "assigned_rate_limits": { 00:36:10.773 "rw_ios_per_sec": 0, 00:36:10.773 "rw_mbytes_per_sec": 0, 00:36:10.773 "r_mbytes_per_sec": 0, 00:36:10.773 "w_mbytes_per_sec": 0 00:36:10.773 }, 00:36:10.773 "claimed": true, 00:36:10.773 "claim_type": "exclusive_write", 00:36:10.773 "zoned": false, 00:36:10.773 "supported_io_types": { 00:36:10.773 "read": true, 00:36:10.773 "write": true, 00:36:10.773 "unmap": true, 00:36:10.773 "write_zeroes": true, 00:36:10.773 "flush": true, 00:36:10.773 "reset": true, 00:36:10.773 "compare": false, 00:36:10.773 "compare_and_write": false, 00:36:10.773 "abort": true, 00:36:10.773 "nvme_admin": false, 00:36:10.773 "nvme_io": false 00:36:10.773 }, 00:36:10.773 "memory_domains": [ 00:36:10.773 { 00:36:10.773 "dma_device_id": "system", 00:36:10.773 "dma_device_type": 1 00:36:10.773 }, 00:36:10.773 { 00:36:10.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:10.773 "dma_device_type": 2 00:36:10.773 } 00:36:10.773 ], 00:36:10.773 "driver_specific": {} 00:36:10.773 } 00:36:10.773 ] 
00:36:10.773 02:06:10 -- common/autotest_common.sh@893 -- # return 0 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:10.773 02:06:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:10.774 02:06:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:11.046 02:06:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:11.046 "name": "Existed_Raid", 00:36:11.046 "uuid": "e728f5eb-d996-42ee-b267-cc34986a85be", 00:36:11.046 "strip_size_kb": 64, 00:36:11.046 "state": "online", 00:36:11.046 "raid_level": "raid5f", 00:36:11.046 "superblock": true, 00:36:11.046 "num_base_bdevs": 4, 00:36:11.046 "num_base_bdevs_discovered": 4, 00:36:11.046 "num_base_bdevs_operational": 4, 00:36:11.046 "base_bdevs_list": [ 00:36:11.046 { 00:36:11.046 "name": "BaseBdev1", 00:36:11.046 "uuid": "e4c04ad5-c856-445e-a278-7a6918386ccc", 00:36:11.046 "is_configured": true, 00:36:11.046 "data_offset": 2048, 00:36:11.046 "data_size": 63488 00:36:11.046 }, 00:36:11.046 { 00:36:11.046 "name": "BaseBdev2", 00:36:11.046 "uuid": "4d2685b4-2c2f-40e5-bea8-c1f901cd03ef", 00:36:11.046 "is_configured": true, 00:36:11.046 "data_offset": 2048, 00:36:11.046 "data_size": 63488 00:36:11.046 }, 00:36:11.046 { 00:36:11.046 "name": "BaseBdev3", 00:36:11.046 "uuid": "5415102f-2b60-4f5d-896d-9f7b8fdf137d", 00:36:11.046 "is_configured": true, 00:36:11.046 "data_offset": 2048, 00:36:11.046 "data_size": 63488 00:36:11.046 }, 00:36:11.046 { 00:36:11.046 "name": "BaseBdev4", 00:36:11.046 "uuid": "36a5fab0-e3b1-406b-b340-0b52a9c25c4f", 00:36:11.047 "is_configured": true, 00:36:11.047 "data_offset": 2048, 00:36:11.047 "data_size": 63488 00:36:11.047 } 00:36:11.047 ] 00:36:11.047 }' 00:36:11.047 02:06:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:11.047 02:06:11 -- common/autotest_common.sh@10 -- # set +x 00:36:11.616 02:06:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:11.873 [2024-04-24 02:06:11.831657] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=Existed_Raid 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.132 02:06:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.404 02:06:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:12.404 "name": "Existed_Raid", 00:36:12.404 "uuid": "e728f5eb-d996-42ee-b267-cc34986a85be", 00:36:12.404 "strip_size_kb": 64, 00:36:12.404 "state": "online", 00:36:12.404 "raid_level": "raid5f", 00:36:12.404 "superblock": true, 00:36:12.404 "num_base_bdevs": 4, 00:36:12.404 "num_base_bdevs_discovered": 3, 00:36:12.404 "num_base_bdevs_operational": 3, 00:36:12.404 "base_bdevs_list": [ 00:36:12.404 { 00:36:12.404 "name": null, 00:36:12.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.404 "is_configured": false, 00:36:12.404 "data_offset": 2048, 00:36:12.404 "data_size": 63488 00:36:12.404 }, 00:36:12.404 { 00:36:12.404 "name": "BaseBdev2", 00:36:12.404 "uuid": "4d2685b4-2c2f-40e5-bea8-c1f901cd03ef", 00:36:12.404 "is_configured": true, 00:36:12.404 "data_offset": 2048, 00:36:12.404 "data_size": 63488 00:36:12.404 }, 00:36:12.404 { 00:36:12.404 "name": "BaseBdev3", 00:36:12.404 "uuid": "5415102f-2b60-4f5d-896d-9f7b8fdf137d", 00:36:12.404 "is_configured": true, 00:36:12.404 "data_offset": 2048, 00:36:12.404 "data_size": 63488 00:36:12.404 }, 00:36:12.404 { 00:36:12.404 "name": "BaseBdev4", 00:36:12.404 "uuid": "36a5fab0-e3b1-406b-b340-0b52a9c25c4f", 00:36:12.404 "is_configured": true, 00:36:12.404 "data_offset": 2048, 00:36:12.404 "data_size": 63488 00:36:12.404 } 00:36:12.404 ] 00:36:12.404 }' 00:36:12.404 02:06:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:12.404 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:36:12.976 02:06:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:36:12.976 02:06:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:12.976 02:06:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.976 02:06:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:12.976 02:06:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:12.976 02:06:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:13.233 02:06:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:13.495 [2024-04-24 02:06:13.331945] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:13.495 [2024-04-24 02:06:13.332368] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:13.495 [2024-04-24 02:06:13.446096] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:13.495 02:06:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:13.495 02:06:13 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:13.495 02:06:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.495 02:06:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:14.064 02:06:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:14.064 02:06:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:14.064 02:06:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:14.321 [2024-04-24 02:06:14.182535] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:14.321 02:06:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:14.321 02:06:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:14.321 02:06:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.321 02:06:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:14.887 02:06:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:14.888 02:06:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:14.888 02:06:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:36:15.149 [2024-04-24 02:06:15.062698] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:15.149 [2024-04-24 02:06:15.063046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:36:15.149 02:06:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:15.149 02:06:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:15.149 02:06:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.149 02:06:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:36:15.746 02:06:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:36:15.746 02:06:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:36:15.746 02:06:15 -- bdev/bdev_raid.sh@287 -- # killprocess 139058 00:36:15.746 02:06:15 -- common/autotest_common.sh@936 -- # '[' -z 139058 ']' 00:36:15.746 02:06:15 -- common/autotest_common.sh@940 -- # kill -0 139058 00:36:15.746 02:06:15 -- common/autotest_common.sh@941 -- # uname 00:36:15.746 02:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:15.746 02:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139058 00:36:15.746 killing process with pid 139058 00:36:15.746 02:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:15.746 02:06:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:15.746 02:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139058' 00:36:15.746 02:06:15 -- common/autotest_common.sh@955 -- # kill 139058 00:36:15.746 02:06:15 -- common/autotest_common.sh@960 -- # wait 139058 00:36:15.746 [2024-04-24 02:06:15.582535] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:15.746 [2024-04-24 02:06:15.582673] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:17.122 ************************************ 00:36:17.122 END TEST raid5f_state_function_test_sb 00:36:17.122 ************************************ 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:36:17.122 00:36:17.122 real 0m17.714s 00:36:17.122 user 0m30.901s 
00:36:17.122 sys 0m2.154s 00:36:17.122 02:06:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:17.122 02:06:17 -- common/autotest_common.sh@10 -- # set +x 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:36:17.122 02:06:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:36:17.122 02:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:17.122 02:06:17 -- common/autotest_common.sh@10 -- # set +x 00:36:17.122 ************************************ 00:36:17.122 START TEST raid5f_superblock_test 00:36:17.122 ************************************ 00:36:17.122 02:06:17 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 4 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=139543 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@358 -- # waitforlisten 139543 /var/tmp/spdk-raid.sock 00:36:17.122 02:06:17 -- common/autotest_common.sh@817 -- # '[' -z 139543 ']' 00:36:17.122 02:06:17 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:17.122 02:06:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:17.122 02:06:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:17.122 02:06:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:17.122 02:06:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:17.122 02:06:17 -- common/autotest_common.sh@10 -- # set +x 00:36:17.380 [2024-04-24 02:06:17.250739] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:36:17.380 [2024-04-24 02:06:17.251343] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139543 ] 00:36:17.380 [2024-04-24 02:06:17.455854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.638 [2024-04-24 02:06:17.698550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.901 [2024-04-24 02:06:17.957847] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:18.172 02:06:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:18.172 02:06:18 -- common/autotest_common.sh@850 -- # return 0 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:18.172 02:06:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:36:18.447 malloc1 00:36:18.447 02:06:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:18.729 [2024-04-24 02:06:18.706668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:18.729 [2024-04-24 02:06:18.707067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:18.729 [2024-04-24 02:06:18.707253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:36:18.730 [2024-04-24 02:06:18.707448] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:18.730 [2024-04-24 02:06:18.710858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:18.730 [2024-04-24 02:06:18.711109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:18.730 pt1 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:18.730 02:06:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:36:19.021 malloc2 00:36:19.021 02:06:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:36:19.280 [2024-04-24 02:06:19.303082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:19.280 [2024-04-24 02:06:19.303391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:19.280 [2024-04-24 02:06:19.303557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:19.280 [2024-04-24 02:06:19.303709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:19.280 [2024-04-24 02:06:19.306841] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:19.280 [2024-04-24 02:06:19.307074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:19.280 pt2 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:19.280 02:06:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:36:19.536 malloc3 00:36:19.794 02:06:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:20.052 [2024-04-24 02:06:19.917254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:20.052 [2024-04-24 02:06:19.917537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.052 [2024-04-24 02:06:19.917628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:36:20.052 [2024-04-24 02:06:19.917765] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.052 [2024-04-24 02:06:19.920404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.052 [2024-04-24 02:06:19.920605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:20.052 pt3 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:20.052 02:06:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:36:20.310 malloc4 00:36:20.310 02:06:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:36:20.568 [2024-04-24 02:06:20.570144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:20.568 [2024-04-24 02:06:20.570404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.568 [2024-04-24 02:06:20.570569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:20.568 [2024-04-24 02:06:20.570733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.568 [2024-04-24 02:06:20.573487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.568 [2024-04-24 02:06:20.573665] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:20.568 pt4 00:36:20.568 02:06:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:20.568 02:06:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:20.568 02:06:20 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:36:20.833 [2024-04-24 02:06:20.790222] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:20.833 [2024-04-24 02:06:20.792630] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:20.833 [2024-04-24 02:06:20.792861] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:20.833 [2024-04-24 02:06:20.793050] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:20.833 [2024-04-24 02:06:20.793410] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:36:20.833 [2024-04-24 02:06:20.793527] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:20.833 [2024-04-24 02:06:20.793715] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:36:20.833 [2024-04-24 02:06:20.803406] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:36:20.833 [2024-04-24 02:06:20.803533] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:36:20.833 [2024-04-24 02:06:20.803833] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:20.833 02:06:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.105 02:06:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:21.105 "name": "raid_bdev1", 00:36:21.105 "uuid": 
"7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:21.105 "strip_size_kb": 64, 00:36:21.105 "state": "online", 00:36:21.105 "raid_level": "raid5f", 00:36:21.105 "superblock": true, 00:36:21.105 "num_base_bdevs": 4, 00:36:21.105 "num_base_bdevs_discovered": 4, 00:36:21.105 "num_base_bdevs_operational": 4, 00:36:21.105 "base_bdevs_list": [ 00:36:21.105 { 00:36:21.105 "name": "pt1", 00:36:21.105 "uuid": "89ca7ee1-c314-521d-9242-3d3dc2bfaae6", 00:36:21.105 "is_configured": true, 00:36:21.105 "data_offset": 2048, 00:36:21.105 "data_size": 63488 00:36:21.105 }, 00:36:21.105 { 00:36:21.105 "name": "pt2", 00:36:21.105 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:21.105 "is_configured": true, 00:36:21.105 "data_offset": 2048, 00:36:21.105 "data_size": 63488 00:36:21.105 }, 00:36:21.105 { 00:36:21.105 "name": "pt3", 00:36:21.105 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:21.105 "is_configured": true, 00:36:21.105 "data_offset": 2048, 00:36:21.105 "data_size": 63488 00:36:21.105 }, 00:36:21.105 { 00:36:21.105 "name": "pt4", 00:36:21.105 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:21.105 "is_configured": true, 00:36:21.105 "data_offset": 2048, 00:36:21.105 "data_size": 63488 00:36:21.105 } 00:36:21.105 ] 00:36:21.105 }' 00:36:21.105 02:06:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:21.105 02:06:21 -- common/autotest_common.sh@10 -- # set +x 00:36:21.771 02:06:21 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:36:21.771 02:06:21 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:22.029 [2024-04-24 02:06:21.886642] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.029 02:06:21 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7313ea44-2baa-46d4-bc45-e077fb466b8f 00:36:22.029 02:06:21 -- bdev/bdev_raid.sh@380 -- # '[' -z 7313ea44-2baa-46d4-bc45-e077fb466b8f ']' 00:36:22.029 02:06:21 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:22.313 [2024-04-24 02:06:22.270569] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:22.313 [2024-04-24 02:06:22.270798] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:22.313 [2024-04-24 02:06:22.270985] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:22.313 [2024-04-24 02:06:22.271161] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:22.313 [2024-04-24 02:06:22.271276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:36:22.313 02:06:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.313 02:06:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:36:22.595 02:06:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:36:22.595 02:06:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:36:22.595 02:06:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:22.595 02:06:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:22.882 02:06:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:22.882 02:06:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:36:23.169 02:06:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:23.169 02:06:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:23.430 02:06:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:23.430 02:06:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:36:23.699 02:06:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:23.699 02:06:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:23.957 02:06:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:36:23.957 02:06:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:36:23.957 02:06:23 -- common/autotest_common.sh@638 -- # local es=0 00:36:23.957 02:06:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:36:23.957 02:06:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:23.957 02:06:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:23.957 02:06:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:23.957 02:06:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:23.957 02:06:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:23.957 02:06:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:23.957 02:06:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:23.957 02:06:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:23.958 02:06:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:36:24.216 [2024-04-24 02:06:24.238934] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:24.216 [2024-04-24 02:06:24.241404] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:24.216 [2024-04-24 02:06:24.241627] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:24.216 [2024-04-24 02:06:24.241792] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:36:24.216 [2024-04-24 02:06:24.241885] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:36:24.216 [2024-04-24 02:06:24.242178] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:36:24.217 [2024-04-24 02:06:24.242319] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:36:24.217 [2024-04-24 02:06:24.242414] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:36:24.217 [2024-04-24 02:06:24.242474] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:24.217 [2024-04-24 02:06:24.242640] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:36:24.217 request: 00:36:24.217 { 00:36:24.217 "name": "raid_bdev1", 00:36:24.217 "raid_level": "raid5f", 00:36:24.217 "base_bdevs": [ 00:36:24.217 "malloc1", 00:36:24.217 "malloc2", 00:36:24.217 "malloc3", 00:36:24.217 "malloc4" 00:36:24.217 ], 00:36:24.217 "superblock": false, 00:36:24.217 "strip_size_kb": 64, 00:36:24.217 "method": "bdev_raid_create", 00:36:24.217 "req_id": 1 00:36:24.217 } 00:36:24.217 Got JSON-RPC error response 00:36:24.217 response: 00:36:24.217 { 00:36:24.217 "code": -17, 00:36:24.217 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:24.217 } 00:36:24.217 02:06:24 -- common/autotest_common.sh@641 -- # es=1 00:36:24.217 02:06:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:24.217 02:06:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:24.217 02:06:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:24.217 02:06:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:36:24.217 02:06:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.786 02:06:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:36:24.786 02:06:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:36:24.786 02:06:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:24.786 [2024-04-24 02:06:24.863017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:24.786 [2024-04-24 02:06:24.863313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:24.786 [2024-04-24 02:06:24.863465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:36:24.786 [2024-04-24 02:06:24.863585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:24.786 [2024-04-24 02:06:24.866677] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:24.786 [2024-04-24 02:06:24.866898] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:24.786 [2024-04-24 02:06:24.867166] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:36:24.786 [2024-04-24 02:06:24.867347] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:24.786 pt1 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.052 02:06:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.324 02:06:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:25.324 "name": "raid_bdev1", 00:36:25.324 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:25.324 "strip_size_kb": 64, 00:36:25.324 "state": "configuring", 00:36:25.324 "raid_level": "raid5f", 00:36:25.324 "superblock": true, 00:36:25.324 "num_base_bdevs": 4, 00:36:25.324 "num_base_bdevs_discovered": 1, 00:36:25.324 "num_base_bdevs_operational": 4, 00:36:25.324 "base_bdevs_list": [ 00:36:25.324 { 00:36:25.324 "name": "pt1", 00:36:25.324 "uuid": "89ca7ee1-c314-521d-9242-3d3dc2bfaae6", 00:36:25.324 "is_configured": true, 00:36:25.324 "data_offset": 2048, 00:36:25.324 "data_size": 63488 00:36:25.324 }, 00:36:25.324 { 00:36:25.324 "name": null, 00:36:25.324 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:25.324 "is_configured": false, 00:36:25.324 "data_offset": 2048, 00:36:25.324 "data_size": 63488 00:36:25.324 }, 00:36:25.324 { 00:36:25.324 "name": null, 00:36:25.324 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:25.324 "is_configured": false, 00:36:25.324 "data_offset": 2048, 00:36:25.324 "data_size": 63488 00:36:25.324 }, 00:36:25.324 { 00:36:25.324 "name": null, 00:36:25.324 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:25.324 "is_configured": false, 00:36:25.324 "data_offset": 2048, 00:36:25.324 "data_size": 63488 00:36:25.324 } 00:36:25.324 ] 00:36:25.324 }' 00:36:25.324 02:06:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:25.324 02:06:25 -- common/autotest_common.sh@10 -- # set +x 00:36:25.889 02:06:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:36:25.889 02:06:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:26.150 [2024-04-24 02:06:25.995492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:26.151 [2024-04-24 02:06:25.995762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:26.151 [2024-04-24 02:06:25.995842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:36:26.151 [2024-04-24 02:06:25.996040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:26.151 [2024-04-24 02:06:25.996616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:26.151 [2024-04-24 02:06:25.996789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:26.151 [2024-04-24 02:06:25.996999] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:26.151 [2024-04-24 02:06:25.997109] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:26.151 pt2 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:26.151 [2024-04-24 02:06:26.199592] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:26.151 02:06:26 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.151 02:06:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.410 02:06:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:26.410 "name": "raid_bdev1", 00:36:26.410 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:26.410 "strip_size_kb": 64, 00:36:26.410 "state": "configuring", 00:36:26.410 "raid_level": "raid5f", 00:36:26.410 "superblock": true, 00:36:26.410 "num_base_bdevs": 4, 00:36:26.410 "num_base_bdevs_discovered": 1, 00:36:26.410 "num_base_bdevs_operational": 4, 00:36:26.410 "base_bdevs_list": [ 00:36:26.410 { 00:36:26.410 "name": "pt1", 00:36:26.410 "uuid": "89ca7ee1-c314-521d-9242-3d3dc2bfaae6", 00:36:26.410 "is_configured": true, 00:36:26.410 "data_offset": 2048, 00:36:26.410 "data_size": 63488 00:36:26.410 }, 00:36:26.410 { 00:36:26.410 "name": null, 00:36:26.410 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:26.410 "is_configured": false, 00:36:26.410 "data_offset": 2048, 00:36:26.410 "data_size": 63488 00:36:26.410 }, 00:36:26.410 { 00:36:26.410 "name": null, 00:36:26.410 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:26.410 "is_configured": false, 00:36:26.410 "data_offset": 2048, 00:36:26.410 "data_size": 63488 00:36:26.410 }, 00:36:26.410 { 00:36:26.410 "name": null, 00:36:26.410 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:26.410 "is_configured": false, 00:36:26.410 "data_offset": 2048, 00:36:26.410 "data_size": 63488 00:36:26.410 } 00:36:26.410 ] 00:36:26.410 }' 00:36:26.410 02:06:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:26.410 02:06:26 -- common/autotest_common.sh@10 -- # set +x 00:36:26.976 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:36:26.976 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:26.976 02:06:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:27.234 [2024-04-24 02:06:27.311783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:27.234 [2024-04-24 02:06:27.312059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.235 [2024-04-24 02:06:27.312146] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:27.235 [2024-04-24 02:06:27.312256] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.235 [2024-04-24 02:06:27.312779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.235 [2024-04-24 02:06:27.312956] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:27.235 [2024-04-24 02:06:27.313184] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:27.235 [2024-04-24 02:06:27.313315] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:27.235 pt2 00:36:27.492 02:06:27 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:27.492 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:27.492 02:06:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:27.749 [2024-04-24 02:06:27.623795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:27.749 [2024-04-24 02:06:27.624038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.749 [2024-04-24 02:06:27.624122] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:36:27.749 [2024-04-24 02:06:27.624248] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.749 [2024-04-24 02:06:27.624761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.750 [2024-04-24 02:06:27.624923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:27.750 [2024-04-24 02:06:27.625115] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:27.750 [2024-04-24 02:06:27.625231] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:27.750 pt3 00:36:27.750 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:27.750 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:27.750 02:06:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:28.007 [2024-04-24 02:06:27.919913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:28.007 [2024-04-24 02:06:27.920221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:28.007 [2024-04-24 02:06:27.920358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:36:28.007 [2024-04-24 02:06:27.920472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:28.007 [2024-04-24 02:06:27.920966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:28.007 [2024-04-24 02:06:27.921143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:28.007 [2024-04-24 02:06:27.921361] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:36:28.007 [2024-04-24 02:06:27.921467] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:28.007 [2024-04-24 02:06:27.921647] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:36:28.007 [2024-04-24 02:06:27.921739] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:28.007 [2024-04-24 02:06:27.921916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:28.007 [2024-04-24 02:06:27.930392] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:36:28.007 [2024-04-24 02:06:27.930521] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:36:28.007 [2024-04-24 02:06:27.930794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:28.007 pt4 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:28.007 02:06:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:28.008 02:06:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.008 02:06:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:28.265 02:06:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:28.265 "name": "raid_bdev1", 00:36:28.265 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:28.265 "strip_size_kb": 64, 00:36:28.265 "state": "online", 00:36:28.265 "raid_level": "raid5f", 00:36:28.265 "superblock": true, 00:36:28.265 "num_base_bdevs": 4, 00:36:28.265 "num_base_bdevs_discovered": 4, 00:36:28.265 "num_base_bdevs_operational": 4, 00:36:28.265 "base_bdevs_list": [ 00:36:28.265 { 00:36:28.265 "name": "pt1", 00:36:28.265 "uuid": "89ca7ee1-c314-521d-9242-3d3dc2bfaae6", 00:36:28.265 "is_configured": true, 00:36:28.265 "data_offset": 2048, 00:36:28.265 "data_size": 63488 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "name": "pt2", 00:36:28.265 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:28.265 "is_configured": true, 00:36:28.265 "data_offset": 2048, 00:36:28.265 "data_size": 63488 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "name": "pt3", 00:36:28.265 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:28.265 "is_configured": true, 00:36:28.265 "data_offset": 2048, 00:36:28.265 "data_size": 63488 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "name": "pt4", 00:36:28.265 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:28.265 "is_configured": true, 00:36:28.265 "data_offset": 2048, 00:36:28.265 "data_size": 63488 00:36:28.265 } 00:36:28.265 ] 00:36:28.265 }' 00:36:28.265 02:06:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:28.265 02:06:28 -- common/autotest_common.sh@10 -- # set +x 00:36:28.830 02:06:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:28.830 02:06:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:36:28.830 [2024-04-24 02:06:28.904845] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:29.088 02:06:28 -- bdev/bdev_raid.sh@430 -- # '[' 7313ea44-2baa-46d4-bc45-e077fb466b8f '!=' 7313ea44-2baa-46d4-bc45-e077fb466b8f ']' 00:36:29.088 02:06:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:36:29.088 02:06:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:29.088 02:06:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:36:29.088 02:06:28 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:29.346 [2024-04-24 02:06:29.212832] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.346 02:06:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:29.603 02:06:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:29.603 "name": "raid_bdev1", 00:36:29.603 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:29.603 "strip_size_kb": 64, 00:36:29.603 "state": "online", 00:36:29.603 "raid_level": "raid5f", 00:36:29.603 "superblock": true, 00:36:29.603 "num_base_bdevs": 4, 00:36:29.603 "num_base_bdevs_discovered": 3, 00:36:29.603 "num_base_bdevs_operational": 3, 00:36:29.603 "base_bdevs_list": [ 00:36:29.603 { 00:36:29.603 "name": null, 00:36:29.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:29.603 "is_configured": false, 00:36:29.603 "data_offset": 2048, 00:36:29.603 "data_size": 63488 00:36:29.603 }, 00:36:29.603 { 00:36:29.603 "name": "pt2", 00:36:29.603 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:29.603 "is_configured": true, 00:36:29.603 "data_offset": 2048, 00:36:29.603 "data_size": 63488 00:36:29.603 }, 00:36:29.603 { 00:36:29.603 "name": "pt3", 00:36:29.603 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:29.603 "is_configured": true, 00:36:29.603 "data_offset": 2048, 00:36:29.603 "data_size": 63488 00:36:29.603 }, 00:36:29.603 { 00:36:29.603 "name": "pt4", 00:36:29.603 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:29.603 "is_configured": true, 00:36:29.603 "data_offset": 2048, 00:36:29.603 "data_size": 63488 00:36:29.603 } 00:36:29.603 ] 00:36:29.603 }' 00:36:29.603 02:06:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:29.603 02:06:29 -- common/autotest_common.sh@10 -- # set +x 00:36:30.169 02:06:30 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:30.427 [2024-04-24 02:06:30.429036] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:30.427 [2024-04-24 02:06:30.429255] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:30.427 [2024-04-24 02:06:30.429424] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:30.427 [2024-04-24 02:06:30.429595] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:30.427 [2024-04-24 02:06:30.429694] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:36:30.427 02:06:30 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.427 02:06:30 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:36:30.684 
02:06:30 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:36:30.684 02:06:30 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:36:30.684 02:06:30 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:36:30.684 02:06:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:30.684 02:06:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:30.942 02:06:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:36:30.942 02:06:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:30.942 02:06:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:31.200 02:06:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:36:31.200 02:06:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:31.200 02:06:31 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:36:31.458 02:06:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:36:31.458 02:06:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:31.458 02:06:31 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:36:31.458 02:06:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:36:31.458 02:06:31 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:31.717 [2024-04-24 02:06:31.593238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:31.717 [2024-04-24 02:06:31.593568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:31.717 [2024-04-24 02:06:31.593885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:31.717 [2024-04-24 02:06:31.594112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:31.717 [2024-04-24 02:06:31.597529] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:31.717 [2024-04-24 02:06:31.597804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:31.717 [2024-04-24 02:06:31.598132] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:31.717 [2024-04-24 02:06:31.598345] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:31.717 pt2 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.717 02:06:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.975 02:06:31 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:36:31.975 "name": "raid_bdev1", 00:36:31.975 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:31.975 "strip_size_kb": 64, 00:36:31.975 "state": "configuring", 00:36:31.975 "raid_level": "raid5f", 00:36:31.975 "superblock": true, 00:36:31.975 "num_base_bdevs": 4, 00:36:31.975 "num_base_bdevs_discovered": 1, 00:36:31.975 "num_base_bdevs_operational": 3, 00:36:31.975 "base_bdevs_list": [ 00:36:31.975 { 00:36:31.975 "name": null, 00:36:31.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.975 "is_configured": false, 00:36:31.975 "data_offset": 2048, 00:36:31.975 "data_size": 63488 00:36:31.975 }, 00:36:31.975 { 00:36:31.975 "name": "pt2", 00:36:31.975 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:31.975 "is_configured": true, 00:36:31.975 "data_offset": 2048, 00:36:31.975 "data_size": 63488 00:36:31.975 }, 00:36:31.975 { 00:36:31.975 "name": null, 00:36:31.975 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:31.975 "is_configured": false, 00:36:31.975 "data_offset": 2048, 00:36:31.975 "data_size": 63488 00:36:31.975 }, 00:36:31.975 { 00:36:31.975 "name": null, 00:36:31.975 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:31.975 "is_configured": false, 00:36:31.975 "data_offset": 2048, 00:36:31.975 "data_size": 63488 00:36:31.975 } 00:36:31.975 ] 00:36:31.975 }' 00:36:31.975 02:06:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:31.975 02:06:31 -- common/autotest_common.sh@10 -- # set +x 00:36:32.550 02:06:32 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:36:32.550 02:06:32 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:36:32.550 02:06:32 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:32.808 [2024-04-24 02:06:32.845131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:32.808 [2024-04-24 02:06:32.845423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:32.808 [2024-04-24 02:06:32.845564] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:36:32.808 [2024-04-24 02:06:32.845682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:32.808 [2024-04-24 02:06:32.846338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:32.808 [2024-04-24 02:06:32.846514] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:32.808 [2024-04-24 02:06:32.846729] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:32.808 [2024-04-24 02:06:32.846845] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:32.808 pt3 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.808 02:06:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.066 02:06:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:33.066 "name": "raid_bdev1", 00:36:33.066 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:33.066 "strip_size_kb": 64, 00:36:33.066 "state": "configuring", 00:36:33.066 "raid_level": "raid5f", 00:36:33.066 "superblock": true, 00:36:33.066 "num_base_bdevs": 4, 00:36:33.066 "num_base_bdevs_discovered": 2, 00:36:33.066 "num_base_bdevs_operational": 3, 00:36:33.066 "base_bdevs_list": [ 00:36:33.066 { 00:36:33.066 "name": null, 00:36:33.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:33.066 "is_configured": false, 00:36:33.066 "data_offset": 2048, 00:36:33.066 "data_size": 63488 00:36:33.066 }, 00:36:33.066 { 00:36:33.066 "name": "pt2", 00:36:33.066 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:33.066 "is_configured": true, 00:36:33.066 "data_offset": 2048, 00:36:33.066 "data_size": 63488 00:36:33.066 }, 00:36:33.066 { 00:36:33.066 "name": "pt3", 00:36:33.066 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:33.066 "is_configured": true, 00:36:33.066 "data_offset": 2048, 00:36:33.066 "data_size": 63488 00:36:33.066 }, 00:36:33.066 { 00:36:33.066 "name": null, 00:36:33.066 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:33.066 "is_configured": false, 00:36:33.066 "data_offset": 2048, 00:36:33.066 "data_size": 63488 00:36:33.066 } 00:36:33.066 ] 00:36:33.066 }' 00:36:33.066 02:06:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:33.066 02:06:33 -- common/autotest_common.sh@10 -- # set +x 00:36:34.072 02:06:33 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:36:34.072 02:06:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:36:34.072 02:06:33 -- bdev/bdev_raid.sh@462 -- # i=3 00:36:34.072 02:06:33 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:34.072 [2024-04-24 02:06:34.057823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:34.072 [2024-04-24 02:06:34.058119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.072 [2024-04-24 02:06:34.058207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:36:34.072 [2024-04-24 02:06:34.058462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.072 [2024-04-24 02:06:34.059036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.072 [2024-04-24 02:06:34.059191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:34.072 [2024-04-24 02:06:34.059444] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:36:34.072 [2024-04-24 02:06:34.059629] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:34.072 [2024-04-24 02:06:34.059942] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:36:34.072 [2024-04-24 02:06:34.060061] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:34.072 [2024-04-24 02:06:34.060239] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000062f0 00:36:34.072 [2024-04-24 02:06:34.068468] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:36:34.072 [2024-04-24 02:06:34.068607] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:36:34.072 [2024-04-24 02:06:34.069042] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:34.072 pt4 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:34.072 02:06:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:34.073 02:06:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:34.073 02:06:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:34.073 02:06:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:34.073 02:06:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.073 02:06:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.331 02:06:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:34.331 "name": "raid_bdev1", 00:36:34.331 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:34.331 "strip_size_kb": 64, 00:36:34.331 "state": "online", 00:36:34.331 "raid_level": "raid5f", 00:36:34.331 "superblock": true, 00:36:34.331 "num_base_bdevs": 4, 00:36:34.331 "num_base_bdevs_discovered": 3, 00:36:34.331 "num_base_bdevs_operational": 3, 00:36:34.331 "base_bdevs_list": [ 00:36:34.331 { 00:36:34.331 "name": null, 00:36:34.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.331 "is_configured": false, 00:36:34.331 "data_offset": 2048, 00:36:34.331 "data_size": 63488 00:36:34.331 }, 00:36:34.331 { 00:36:34.331 "name": "pt2", 00:36:34.331 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:34.331 "is_configured": true, 00:36:34.331 "data_offset": 2048, 00:36:34.331 "data_size": 63488 00:36:34.331 }, 00:36:34.331 { 00:36:34.331 "name": "pt3", 00:36:34.331 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:34.331 "is_configured": true, 00:36:34.331 "data_offset": 2048, 00:36:34.331 "data_size": 63488 00:36:34.331 }, 00:36:34.331 { 00:36:34.331 "name": "pt4", 00:36:34.331 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:34.331 "is_configured": true, 00:36:34.331 "data_offset": 2048, 00:36:34.331 "data_size": 63488 00:36:34.331 } 00:36:34.331 ] 00:36:34.332 }' 00:36:34.332 02:06:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:34.332 02:06:34 -- common/autotest_common.sh@10 -- # set +x 00:36:34.898 02:06:34 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:36:34.898 02:06:34 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:35.462 [2024-04-24 02:06:35.264619] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:35.462 [2024-04-24 02:06:35.264890] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:35.462 [2024-04-24 02:06:35.265077] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:35.462 [2024-04-24 02:06:35.265258] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:35.462 [2024-04-24 02:06:35.265359] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:36:35.462 02:06:35 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:36:35.462 02:06:35 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.721 02:06:35 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:36:35.721 02:06:35 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:36:35.721 02:06:35 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:35.995 [2024-04-24 02:06:35.864507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:35.995 [2024-04-24 02:06:35.865251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.995 [2024-04-24 02:06:35.865573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:36:35.995 [2024-04-24 02:06:35.865843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.995 [2024-04-24 02:06:35.869099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.995 [2024-04-24 02:06:35.869436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:35.995 [2024-04-24 02:06:35.869863] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:36:35.995 [2024-04-24 02:06:35.870055] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:35.995 pt1 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.995 02:06:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.253 02:06:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:36.253 "name": "raid_bdev1", 00:36:36.253 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:36.253 "strip_size_kb": 64, 00:36:36.253 "state": "configuring", 00:36:36.253 "raid_level": "raid5f", 00:36:36.253 "superblock": true, 00:36:36.253 "num_base_bdevs": 4, 00:36:36.253 "num_base_bdevs_discovered": 1, 00:36:36.253 "num_base_bdevs_operational": 4, 00:36:36.253 "base_bdevs_list": [ 00:36:36.253 { 00:36:36.253 "name": "pt1", 00:36:36.253 "uuid": "89ca7ee1-c314-521d-9242-3d3dc2bfaae6", 00:36:36.253 "is_configured": true, 
00:36:36.253 "data_offset": 2048, 00:36:36.253 "data_size": 63488 00:36:36.253 }, 00:36:36.253 { 00:36:36.253 "name": null, 00:36:36.253 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:36.253 "is_configured": false, 00:36:36.253 "data_offset": 2048, 00:36:36.253 "data_size": 63488 00:36:36.253 }, 00:36:36.253 { 00:36:36.253 "name": null, 00:36:36.253 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:36.253 "is_configured": false, 00:36:36.253 "data_offset": 2048, 00:36:36.253 "data_size": 63488 00:36:36.253 }, 00:36:36.253 { 00:36:36.253 "name": null, 00:36:36.253 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:36.253 "is_configured": false, 00:36:36.253 "data_offset": 2048, 00:36:36.253 "data_size": 63488 00:36:36.253 } 00:36:36.253 ] 00:36:36.253 }' 00:36:36.253 02:06:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:36.253 02:06:36 -- common/autotest_common.sh@10 -- # set +x 00:36:36.818 02:06:36 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:36:36.818 02:06:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:36.818 02:06:36 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:37.075 02:06:36 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:36:37.075 02:06:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:37.075 02:06:36 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:37.075 02:06:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:36:37.075 02:06:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:37.075 02:06:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:36:37.333 02:06:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:36:37.333 02:06:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:37.333 02:06:37 -- bdev/bdev_raid.sh@489 -- # i=3 00:36:37.333 02:06:37 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:37.592 [2024-04-24 02:06:37.522309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:37.592 [2024-04-24 02:06:37.522719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.592 [2024-04-24 02:06:37.522929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:36:37.592 [2024-04-24 02:06:37.523110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.592 [2024-04-24 02:06:37.523915] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.592 [2024-04-24 02:06:37.524186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:37.592 [2024-04-24 02:06:37.524534] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:36:37.592 [2024-04-24 02:06:37.524689] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:37.592 [2024-04-24 02:06:37.524809] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:37.592 [2024-04-24 02:06:37.524908] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:36:37.592 [2024-04-24 02:06:37.525161] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:37.592 pt4 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.592 02:06:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.850 02:06:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:37.850 "name": "raid_bdev1", 00:36:37.850 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:37.850 "strip_size_kb": 64, 00:36:37.850 "state": "configuring", 00:36:37.850 "raid_level": "raid5f", 00:36:37.850 "superblock": true, 00:36:37.850 "num_base_bdevs": 4, 00:36:37.850 "num_base_bdevs_discovered": 1, 00:36:37.850 "num_base_bdevs_operational": 3, 00:36:37.850 "base_bdevs_list": [ 00:36:37.850 { 00:36:37.850 "name": null, 00:36:37.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.850 "is_configured": false, 00:36:37.850 "data_offset": 2048, 00:36:37.850 "data_size": 63488 00:36:37.850 }, 00:36:37.850 { 00:36:37.850 "name": null, 00:36:37.850 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:37.850 "is_configured": false, 00:36:37.850 "data_offset": 2048, 00:36:37.850 "data_size": 63488 00:36:37.850 }, 00:36:37.850 { 00:36:37.850 "name": null, 00:36:37.850 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:37.850 "is_configured": false, 00:36:37.850 "data_offset": 2048, 00:36:37.850 "data_size": 63488 00:36:37.850 }, 00:36:37.850 { 00:36:37.850 "name": "pt4", 00:36:37.850 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:37.850 "is_configured": true, 00:36:37.850 "data_offset": 2048, 00:36:37.850 "data_size": 63488 00:36:37.850 } 00:36:37.850 ] 00:36:37.850 }' 00:36:37.850 02:06:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:37.850 02:06:37 -- common/autotest_common.sh@10 -- # set +x 00:36:38.415 02:06:38 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:36:38.415 02:06:38 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:36:38.415 02:06:38 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:38.673 [2024-04-24 02:06:38.625928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:38.673 [2024-04-24 02:06:38.626045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.673 [2024-04-24 02:06:38.626099] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:36:38.673 [2024-04-24 02:06:38.626149] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.673 [2024-04-24 02:06:38.626723] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.673 [2024-04-24 02:06:38.626813] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:38.673 [2024-04-24 02:06:38.626956] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:38.673 [2024-04-24 02:06:38.626992] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:38.673 pt2 00:36:38.673 02:06:38 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:36:38.673 02:06:38 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:36:38.673 02:06:38 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:38.931 [2024-04-24 02:06:38.818016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:38.931 [2024-04-24 02:06:38.818170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.931 [2024-04-24 02:06:38.818234] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:36:38.931 [2024-04-24 02:06:38.818280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.931 [2024-04-24 02:06:38.819020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.931 [2024-04-24 02:06:38.819135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:38.931 [2024-04-24 02:06:38.819304] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:38.931 [2024-04-24 02:06:38.819348] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:38.931 [2024-04-24 02:06:38.819570] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:36:38.931 [2024-04-24 02:06:38.819608] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:38.931 [2024-04-24 02:06:38.819742] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:36:38.931 [2024-04-24 02:06:38.828528] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:36:38.931 [2024-04-24 02:06:38.828557] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:36:38.931 [2024-04-24 02:06:38.828833] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:38.931 pt3 00:36:38.931 02:06:38 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:36:38.931 02:06:38 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:36:38.931 02:06:38 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:38.931 02:06:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:38.931 02:06:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:38.931 02:06:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.932 02:06:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.190 02:06:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:39.190 "name": "raid_bdev1", 00:36:39.190 "uuid": "7313ea44-2baa-46d4-bc45-e077fb466b8f", 00:36:39.190 "strip_size_kb": 64, 00:36:39.190 "state": "online", 00:36:39.190 "raid_level": "raid5f", 00:36:39.190 "superblock": true, 00:36:39.190 "num_base_bdevs": 4, 00:36:39.190 "num_base_bdevs_discovered": 3, 00:36:39.190 "num_base_bdevs_operational": 3, 00:36:39.190 "base_bdevs_list": [ 00:36:39.190 { 00:36:39.190 "name": null, 00:36:39.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.190 "is_configured": false, 00:36:39.190 "data_offset": 2048, 00:36:39.190 "data_size": 63488 00:36:39.190 }, 00:36:39.190 { 00:36:39.190 "name": "pt2", 00:36:39.190 "uuid": "00939ab3-ef5d-5692-a55e-5f95b2bba13e", 00:36:39.190 "is_configured": true, 00:36:39.190 "data_offset": 2048, 00:36:39.190 "data_size": 63488 00:36:39.190 }, 00:36:39.190 { 00:36:39.190 "name": "pt3", 00:36:39.190 "uuid": "effcc703-f05c-5d42-8692-bdc8d46ab798", 00:36:39.190 "is_configured": true, 00:36:39.190 "data_offset": 2048, 00:36:39.190 "data_size": 63488 00:36:39.190 }, 00:36:39.190 { 00:36:39.190 "name": "pt4", 00:36:39.190 "uuid": "eb104206-a3c2-5b7d-a77e-fdd0bd6f58cb", 00:36:39.190 "is_configured": true, 00:36:39.190 "data_offset": 2048, 00:36:39.190 "data_size": 63488 00:36:39.190 } 00:36:39.190 ] 00:36:39.190 }' 00:36:39.190 02:06:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:39.190 02:06:39 -- common/autotest_common.sh@10 -- # set +x 00:36:39.758 02:06:39 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:39.758 02:06:39 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:36:40.016 [2024-04-24 02:06:39.863592] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:40.016 02:06:39 -- bdev/bdev_raid.sh@506 -- # '[' 7313ea44-2baa-46d4-bc45-e077fb466b8f '!=' 7313ea44-2baa-46d4-bc45-e077fb466b8f ']' 00:36:40.016 02:06:39 -- bdev/bdev_raid.sh@511 -- # killprocess 139543 00:36:40.016 02:06:39 -- common/autotest_common.sh@936 -- # '[' -z 139543 ']' 00:36:40.016 02:06:39 -- common/autotest_common.sh@940 -- # kill -0 139543 00:36:40.016 02:06:39 -- common/autotest_common.sh@941 -- # uname 00:36:40.016 02:06:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:40.016 02:06:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139543 00:36:40.016 02:06:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:40.016 02:06:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:40.016 02:06:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139543' 00:36:40.016 killing process with pid 139543 00:36:40.016 02:06:39 -- common/autotest_common.sh@955 -- # kill 139543 00:36:40.016 02:06:39 -- common/autotest_common.sh@960 -- # wait 139543 00:36:40.016 [2024-04-24 02:06:39.908766] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:40.016 [2024-04-24 02:06:39.908855] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:40.016 [2024-04-24 02:06:39.908932] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:40.016 [2024-04-24 02:06:39.908952] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:36:40.274 [2024-04-24 02:06:40.355520] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:36:42.179 00:36:42.179 real 0m24.622s 00:36:42.179 user 0m43.981s 00:36:42.179 sys 0m3.313s 00:36:42.179 02:06:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:42.179 02:06:41 -- common/autotest_common.sh@10 -- # set +x 00:36:42.179 ************************************ 00:36:42.179 END TEST raid5f_superblock_test 00:36:42.179 ************************************ 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:36:42.179 02:06:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:36:42.179 02:06:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:42.179 02:06:41 -- common/autotest_common.sh@10 -- # set +x 00:36:42.179 ************************************ 00:36:42.179 START TEST raid5f_rebuild_test 00:36:42.179 ************************************ 00:36:42.179 02:06:41 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 false false 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=140241 
00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140241 /var/tmp/spdk-raid.sock 00:36:42.179 02:06:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:42.179 02:06:41 -- common/autotest_common.sh@817 -- # '[' -z 140241 ']' 00:36:42.179 02:06:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:42.179 02:06:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:42.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:42.179 02:06:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:42.179 02:06:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:42.179 02:06:41 -- common/autotest_common.sh@10 -- # set +x 00:36:42.179 [2024-04-24 02:06:41.956062] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:36:42.179 [2024-04-24 02:06:41.956208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140241 ] 00:36:42.179 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:42.179 Zero copy mechanism will not be used. 00:36:42.179 [2024-04-24 02:06:42.116662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.439 [2024-04-24 02:06:42.348751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.697 [2024-04-24 02:06:42.592919] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:42.957 02:06:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:42.957 02:06:42 -- common/autotest_common.sh@850 -- # return 0 00:36:42.957 02:06:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:36:42.957 02:06:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:36:42.957 02:06:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:43.216 BaseBdev1 00:36:43.216 02:06:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:36:43.216 02:06:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:36:43.216 02:06:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:43.475 BaseBdev2 00:36:43.475 02:06:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:36:43.475 02:06:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:36:43.475 02:06:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:44.043 BaseBdev3 00:36:44.043 02:06:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:36:44.043 02:06:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:36:44.043 02:06:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:44.301 BaseBdev4 00:36:44.301 02:06:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:36:44.560 spare_malloc 00:36:44.560 02:06:44 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:44.560 spare_delay 00:36:44.560 02:06:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:45.127 [2024-04-24 02:06:44.904986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:45.127 [2024-04-24 02:06:44.905088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:45.127 [2024-04-24 02:06:44.905123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:36:45.128 [2024-04-24 02:06:44.905175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:45.128 [2024-04-24 02:06:44.907847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:45.128 [2024-04-24 02:06:44.907912] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:45.128 spare 00:36:45.128 02:06:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:36:45.128 [2024-04-24 02:06:45.113119] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:45.128 [2024-04-24 02:06:45.115357] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:45.128 [2024-04-24 02:06:45.115423] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:45.128 [2024-04-24 02:06:45.115456] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:45.128 [2024-04-24 02:06:45.115540] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:36:45.128 [2024-04-24 02:06:45.115549] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:45.128 [2024-04-24 02:06:45.115727] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:36:45.128 [2024-04-24 02:06:45.124951] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:36:45.128 [2024-04-24 02:06:45.124983] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:36:45.128 [2024-04-24 02:06:45.125203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:36:45.128 02:06:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:45.387 02:06:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:45.387 "name": "raid_bdev1", 00:36:45.387 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:45.387 "strip_size_kb": 64, 00:36:45.387 "state": "online", 00:36:45.387 "raid_level": "raid5f", 00:36:45.387 "superblock": false, 00:36:45.387 "num_base_bdevs": 4, 00:36:45.387 "num_base_bdevs_discovered": 4, 00:36:45.387 "num_base_bdevs_operational": 4, 00:36:45.387 "base_bdevs_list": [ 00:36:45.387 { 00:36:45.387 "name": "BaseBdev1", 00:36:45.387 "uuid": "b09b2711-92fa-400d-bf1c-0ea0d8523a99", 00:36:45.387 "is_configured": true, 00:36:45.387 "data_offset": 0, 00:36:45.387 "data_size": 65536 00:36:45.387 }, 00:36:45.387 { 00:36:45.387 "name": "BaseBdev2", 00:36:45.387 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:45.387 "is_configured": true, 00:36:45.387 "data_offset": 0, 00:36:45.387 "data_size": 65536 00:36:45.387 }, 00:36:45.387 { 00:36:45.387 "name": "BaseBdev3", 00:36:45.387 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:45.387 "is_configured": true, 00:36:45.387 "data_offset": 0, 00:36:45.387 "data_size": 65536 00:36:45.387 }, 00:36:45.387 { 00:36:45.387 "name": "BaseBdev4", 00:36:45.387 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:45.387 "is_configured": true, 00:36:45.387 "data_offset": 0, 00:36:45.387 "data_size": 65536 00:36:45.387 } 00:36:45.387 ] 00:36:45.387 }' 00:36:45.387 02:06:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:45.387 02:06:45 -- common/autotest_common.sh@10 -- # set +x 00:36:45.953 02:06:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:45.953 02:06:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:36:46.211 [2024-04-24 02:06:46.227619] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:46.211 02:06:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:36:46.211 02:06:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:46.211 02:06:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.470 02:06:46 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:36:46.470 02:06:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:36:46.470 02:06:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:36:46.470 02:06:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@12 -- # local i 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:46.470 02:06:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:46.728 [2024-04-24 02:06:46.731580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:36:46.728 /dev/nbd0 00:36:46.728 02:06:46 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:36:46.728 02:06:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:46.728 02:06:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:36:46.728 02:06:46 -- common/autotest_common.sh@855 -- # local i 00:36:46.728 02:06:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:36:46.728 02:06:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:36:46.728 02:06:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:36:46.728 02:06:46 -- common/autotest_common.sh@859 -- # break 00:36:46.728 02:06:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:46.728 02:06:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:46.728 02:06:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:46.728 1+0 records in 00:36:46.728 1+0 records out 00:36:46.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330363 s, 12.4 MB/s 00:36:46.728 02:06:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:46.728 02:06:46 -- common/autotest_common.sh@872 -- # size=4096 00:36:46.728 02:06:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:46.728 02:06:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:36:46.728 02:06:46 -- common/autotest_common.sh@875 -- # return 0 00:36:46.728 02:06:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:46.728 02:06:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:46.986 02:06:46 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:36:46.986 02:06:46 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:36:46.986 02:06:46 -- bdev/bdev_raid.sh@582 -- # echo 192 00:36:46.986 02:06:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:36:47.552 512+0 records in 00:36:47.552 512+0 records out 00:36:47.552 100663296 bytes (101 MB, 96 MiB) copied, 0.665996 s, 151 MB/s 00:36:47.552 02:06:47 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:47.552 02:06:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:47.552 02:06:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:47.552 02:06:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:47.552 02:06:47 -- bdev/nbd_common.sh@51 -- # local i 00:36:47.552 02:06:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:47.552 02:06:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:48.118 [2024-04-24 02:06:47.918086] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@41 -- # break 00:36:48.118 02:06:47 -- bdev/nbd_common.sh@45 -- # return 0 00:36:48.118 02:06:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:48.118 [2024-04-24 02:06:48.177742] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:48.118 02:06:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:48.377 02:06:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.377 02:06:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.636 02:06:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:48.636 "name": "raid_bdev1", 00:36:48.636 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:48.636 "strip_size_kb": 64, 00:36:48.636 "state": "online", 00:36:48.636 "raid_level": "raid5f", 00:36:48.636 "superblock": false, 00:36:48.636 "num_base_bdevs": 4, 00:36:48.636 "num_base_bdevs_discovered": 3, 00:36:48.636 "num_base_bdevs_operational": 3, 00:36:48.636 "base_bdevs_list": [ 00:36:48.636 { 00:36:48.636 "name": null, 00:36:48.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:48.636 "is_configured": false, 00:36:48.636 "data_offset": 0, 00:36:48.636 "data_size": 65536 00:36:48.636 }, 00:36:48.636 { 00:36:48.636 "name": "BaseBdev2", 00:36:48.636 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:48.636 "is_configured": true, 00:36:48.636 "data_offset": 0, 00:36:48.636 "data_size": 65536 00:36:48.636 }, 00:36:48.636 { 00:36:48.636 "name": "BaseBdev3", 00:36:48.636 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:48.636 "is_configured": true, 00:36:48.636 "data_offset": 0, 00:36:48.636 "data_size": 65536 00:36:48.636 }, 00:36:48.636 { 00:36:48.636 "name": "BaseBdev4", 00:36:48.636 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:48.636 "is_configured": true, 00:36:48.636 "data_offset": 0, 00:36:48.636 "data_size": 65536 00:36:48.636 } 00:36:48.636 ] 00:36:48.636 }' 00:36:48.636 02:06:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:48.636 02:06:48 -- common/autotest_common.sh@10 -- # set +x 00:36:49.202 02:06:49 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:49.461 [2024-04-24 02:06:49.398071] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:36:49.461 [2024-04-24 02:06:49.398150] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:49.461 [2024-04-24 02:06:49.418259] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:36:49.461 [2024-04-24 02:06:49.431481] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:49.461 02:06:49 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:50.396 02:06:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.655 02:06:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:50.655 "name": "raid_bdev1", 00:36:50.655 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:50.655 "strip_size_kb": 64, 00:36:50.655 "state": "online", 00:36:50.655 "raid_level": "raid5f", 00:36:50.655 "superblock": false, 00:36:50.655 "num_base_bdevs": 4, 00:36:50.655 "num_base_bdevs_discovered": 4, 00:36:50.655 "num_base_bdevs_operational": 4, 00:36:50.655 "process": { 00:36:50.655 "type": "rebuild", 00:36:50.655 "target": "spare", 00:36:50.655 "progress": { 00:36:50.655 "blocks": 23040, 00:36:50.655 "percent": 11 00:36:50.655 } 00:36:50.655 }, 00:36:50.655 "base_bdevs_list": [ 00:36:50.655 { 00:36:50.655 "name": "spare", 00:36:50.655 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:36:50.655 "is_configured": true, 00:36:50.655 "data_offset": 0, 00:36:50.655 "data_size": 65536 00:36:50.655 }, 00:36:50.655 { 00:36:50.655 "name": "BaseBdev2", 00:36:50.655 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:50.655 "is_configured": true, 00:36:50.655 "data_offset": 0, 00:36:50.655 "data_size": 65536 00:36:50.655 }, 00:36:50.655 { 00:36:50.655 "name": "BaseBdev3", 00:36:50.655 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:50.655 "is_configured": true, 00:36:50.655 "data_offset": 0, 00:36:50.655 "data_size": 65536 00:36:50.655 }, 00:36:50.655 { 00:36:50.655 "name": "BaseBdev4", 00:36:50.655 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:50.655 "is_configured": true, 00:36:50.655 "data_offset": 0, 00:36:50.655 "data_size": 65536 00:36:50.655 } 00:36:50.655 ] 00:36:50.655 }' 00:36:50.655 02:06:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:50.914 02:06:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:50.914 02:06:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:50.914 02:06:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:50.914 02:06:50 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:51.172 [2024-04-24 02:06:51.109981] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:51.172 [2024-04-24 02:06:51.146163] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:51.172 [2024-04-24 02:06:51.146332] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.172 02:06:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.431 02:06:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:51.431 "name": "raid_bdev1", 00:36:51.431 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:51.431 "strip_size_kb": 64, 00:36:51.431 "state": "online", 00:36:51.431 "raid_level": "raid5f", 00:36:51.431 "superblock": false, 00:36:51.431 "num_base_bdevs": 4, 00:36:51.431 "num_base_bdevs_discovered": 3, 00:36:51.431 "num_base_bdevs_operational": 3, 00:36:51.431 "base_bdevs_list": [ 00:36:51.431 { 00:36:51.431 "name": null, 00:36:51.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.431 "is_configured": false, 00:36:51.431 "data_offset": 0, 00:36:51.431 "data_size": 65536 00:36:51.431 }, 00:36:51.431 { 00:36:51.431 "name": "BaseBdev2", 00:36:51.431 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:51.431 "is_configured": true, 00:36:51.431 "data_offset": 0, 00:36:51.431 "data_size": 65536 00:36:51.431 }, 00:36:51.431 { 00:36:51.431 "name": "BaseBdev3", 00:36:51.431 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:51.431 "is_configured": true, 00:36:51.431 "data_offset": 0, 00:36:51.431 "data_size": 65536 00:36:51.431 }, 00:36:51.431 { 00:36:51.431 "name": "BaseBdev4", 00:36:51.431 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:51.431 "is_configured": true, 00:36:51.431 "data_offset": 0, 00:36:51.431 "data_size": 65536 00:36:51.431 } 00:36:51.431 ] 00:36:51.431 }' 00:36:51.431 02:06:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:51.431 02:06:51 -- common/autotest_common.sh@10 -- # set +x 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:52.429 "name": "raid_bdev1", 00:36:52.429 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:52.429 "strip_size_kb": 64, 00:36:52.429 "state": "online", 00:36:52.429 "raid_level": "raid5f", 00:36:52.429 "superblock": false, 00:36:52.429 "num_base_bdevs": 4, 00:36:52.429 "num_base_bdevs_discovered": 3, 00:36:52.429 "num_base_bdevs_operational": 3, 00:36:52.429 "base_bdevs_list": [ 00:36:52.429 { 00:36:52.429 "name": null, 00:36:52.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.429 "is_configured": false, 00:36:52.429 "data_offset": 0, 00:36:52.429 "data_size": 65536 00:36:52.429 }, 00:36:52.429 { 00:36:52.429 "name": "BaseBdev2", 00:36:52.429 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:52.429 "is_configured": true, 00:36:52.429 "data_offset": 0, 00:36:52.429 "data_size": 65536 00:36:52.429 }, 00:36:52.429 { 00:36:52.429 "name": "BaseBdev3", 00:36:52.429 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:52.429 "is_configured": true, 
00:36:52.429 "data_offset": 0, 00:36:52.429 "data_size": 65536 00:36:52.429 }, 00:36:52.429 { 00:36:52.429 "name": "BaseBdev4", 00:36:52.429 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:52.429 "is_configured": true, 00:36:52.429 "data_offset": 0, 00:36:52.429 "data_size": 65536 00:36:52.429 } 00:36:52.429 ] 00:36:52.429 }' 00:36:52.429 02:06:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:52.692 02:06:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:52.692 02:06:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:52.692 02:06:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:52.692 02:06:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:52.950 [2024-04-24 02:06:52.863262] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:36:52.950 [2024-04-24 02:06:52.863338] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:52.950 [2024-04-24 02:06:52.882928] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:36:52.950 [2024-04-24 02:06:52.895694] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:52.950 02:06:52 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.887 02:06:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.145 02:06:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:54.145 "name": "raid_bdev1", 00:36:54.145 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:54.145 "strip_size_kb": 64, 00:36:54.145 "state": "online", 00:36:54.145 "raid_level": "raid5f", 00:36:54.145 "superblock": false, 00:36:54.145 "num_base_bdevs": 4, 00:36:54.145 "num_base_bdevs_discovered": 4, 00:36:54.145 "num_base_bdevs_operational": 4, 00:36:54.145 "process": { 00:36:54.145 "type": "rebuild", 00:36:54.145 "target": "spare", 00:36:54.145 "progress": { 00:36:54.145 "blocks": 23040, 00:36:54.145 "percent": 11 00:36:54.145 } 00:36:54.145 }, 00:36:54.145 "base_bdevs_list": [ 00:36:54.145 { 00:36:54.145 "name": "spare", 00:36:54.145 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:36:54.145 "is_configured": true, 00:36:54.145 "data_offset": 0, 00:36:54.145 "data_size": 65536 00:36:54.145 }, 00:36:54.145 { 00:36:54.145 "name": "BaseBdev2", 00:36:54.145 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:54.145 "is_configured": true, 00:36:54.145 "data_offset": 0, 00:36:54.145 "data_size": 65536 00:36:54.145 }, 00:36:54.145 { 00:36:54.145 "name": "BaseBdev3", 00:36:54.145 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:54.145 "is_configured": true, 00:36:54.145 "data_offset": 0, 00:36:54.145 "data_size": 65536 00:36:54.145 }, 00:36:54.145 { 00:36:54.145 "name": "BaseBdev4", 00:36:54.145 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:54.145 "is_configured": true, 00:36:54.145 "data_offset": 0, 
00:36:54.145 "data_size": 65536 00:36:54.145 } 00:36:54.145 ] 00:36:54.145 }' 00:36:54.145 02:06:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:54.404 02:06:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:54.404 02:06:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:54.404 02:06:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@657 -- # local timeout=789 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.405 02:06:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:54.664 02:06:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:54.664 "name": "raid_bdev1", 00:36:54.664 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:54.664 "strip_size_kb": 64, 00:36:54.664 "state": "online", 00:36:54.664 "raid_level": "raid5f", 00:36:54.664 "superblock": false, 00:36:54.664 "num_base_bdevs": 4, 00:36:54.664 "num_base_bdevs_discovered": 4, 00:36:54.664 "num_base_bdevs_operational": 4, 00:36:54.664 "process": { 00:36:54.664 "type": "rebuild", 00:36:54.664 "target": "spare", 00:36:54.664 "progress": { 00:36:54.664 "blocks": 30720, 00:36:54.664 "percent": 15 00:36:54.664 } 00:36:54.664 }, 00:36:54.664 "base_bdevs_list": [ 00:36:54.664 { 00:36:54.664 "name": "spare", 00:36:54.664 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:36:54.664 "is_configured": true, 00:36:54.664 "data_offset": 0, 00:36:54.664 "data_size": 65536 00:36:54.664 }, 00:36:54.664 { 00:36:54.664 "name": "BaseBdev2", 00:36:54.664 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:54.664 "is_configured": true, 00:36:54.664 "data_offset": 0, 00:36:54.664 "data_size": 65536 00:36:54.664 }, 00:36:54.664 { 00:36:54.664 "name": "BaseBdev3", 00:36:54.664 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:54.664 "is_configured": true, 00:36:54.664 "data_offset": 0, 00:36:54.664 "data_size": 65536 00:36:54.664 }, 00:36:54.664 { 00:36:54.664 "name": "BaseBdev4", 00:36:54.664 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:54.664 "is_configured": true, 00:36:54.664 "data_offset": 0, 00:36:54.664 "data_size": 65536 00:36:54.664 } 00:36:54.664 ] 00:36:54.664 }' 00:36:54.664 02:06:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:54.664 02:06:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:54.664 02:06:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:54.664 02:06:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:54.664 02:06:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:56.034 "name": "raid_bdev1", 00:36:56.034 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:56.034 "strip_size_kb": 64, 00:36:56.034 "state": "online", 00:36:56.034 "raid_level": "raid5f", 00:36:56.034 "superblock": false, 00:36:56.034 "num_base_bdevs": 4, 00:36:56.034 "num_base_bdevs_discovered": 4, 00:36:56.034 "num_base_bdevs_operational": 4, 00:36:56.034 "process": { 00:36:56.034 "type": "rebuild", 00:36:56.034 "target": "spare", 00:36:56.034 "progress": { 00:36:56.034 "blocks": 57600, 00:36:56.034 "percent": 29 00:36:56.034 } 00:36:56.034 }, 00:36:56.034 "base_bdevs_list": [ 00:36:56.034 { 00:36:56.034 "name": "spare", 00:36:56.034 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:36:56.034 "is_configured": true, 00:36:56.034 "data_offset": 0, 00:36:56.034 "data_size": 65536 00:36:56.034 }, 00:36:56.034 { 00:36:56.034 "name": "BaseBdev2", 00:36:56.034 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:56.034 "is_configured": true, 00:36:56.034 "data_offset": 0, 00:36:56.034 "data_size": 65536 00:36:56.034 }, 00:36:56.034 { 00:36:56.034 "name": "BaseBdev3", 00:36:56.034 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:56.034 "is_configured": true, 00:36:56.034 "data_offset": 0, 00:36:56.034 "data_size": 65536 00:36:56.034 }, 00:36:56.034 { 00:36:56.034 "name": "BaseBdev4", 00:36:56.034 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:56.034 "is_configured": true, 00:36:56.034 "data_offset": 0, 00:36:56.034 "data_size": 65536 00:36:56.034 } 00:36:56.034 ] 00:36:56.034 }' 00:36:56.034 02:06:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:56.034 02:06:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:56.034 02:06:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:56.034 02:06:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:56.034 02:06:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:57.407 "name": "raid_bdev1", 00:36:57.407 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:57.407 "strip_size_kb": 64, 00:36:57.407 "state": "online", 
00:36:57.407 "raid_level": "raid5f", 00:36:57.407 "superblock": false, 00:36:57.407 "num_base_bdevs": 4, 00:36:57.407 "num_base_bdevs_discovered": 4, 00:36:57.407 "num_base_bdevs_operational": 4, 00:36:57.407 "process": { 00:36:57.407 "type": "rebuild", 00:36:57.407 "target": "spare", 00:36:57.407 "progress": { 00:36:57.407 "blocks": 84480, 00:36:57.407 "percent": 42 00:36:57.407 } 00:36:57.407 }, 00:36:57.407 "base_bdevs_list": [ 00:36:57.407 { 00:36:57.407 "name": "spare", 00:36:57.407 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:36:57.407 "is_configured": true, 00:36:57.407 "data_offset": 0, 00:36:57.407 "data_size": 65536 00:36:57.407 }, 00:36:57.407 { 00:36:57.407 "name": "BaseBdev2", 00:36:57.407 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:57.407 "is_configured": true, 00:36:57.407 "data_offset": 0, 00:36:57.407 "data_size": 65536 00:36:57.407 }, 00:36:57.407 { 00:36:57.407 "name": "BaseBdev3", 00:36:57.407 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:57.407 "is_configured": true, 00:36:57.407 "data_offset": 0, 00:36:57.407 "data_size": 65536 00:36:57.407 }, 00:36:57.407 { 00:36:57.407 "name": "BaseBdev4", 00:36:57.407 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:57.407 "is_configured": true, 00:36:57.407 "data_offset": 0, 00:36:57.407 "data_size": 65536 00:36:57.407 } 00:36:57.407 ] 00:36:57.407 }' 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:57.407 02:06:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:58.780 "name": "raid_bdev1", 00:36:58.780 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:36:58.780 "strip_size_kb": 64, 00:36:58.780 "state": "online", 00:36:58.780 "raid_level": "raid5f", 00:36:58.780 "superblock": false, 00:36:58.780 "num_base_bdevs": 4, 00:36:58.780 "num_base_bdevs_discovered": 4, 00:36:58.780 "num_base_bdevs_operational": 4, 00:36:58.780 "process": { 00:36:58.780 "type": "rebuild", 00:36:58.780 "target": "spare", 00:36:58.780 "progress": { 00:36:58.780 "blocks": 109440, 00:36:58.780 "percent": 55 00:36:58.780 } 00:36:58.780 }, 00:36:58.780 "base_bdevs_list": [ 00:36:58.780 { 00:36:58.780 "name": "spare", 00:36:58.780 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:36:58.780 "is_configured": true, 00:36:58.780 "data_offset": 0, 00:36:58.780 "data_size": 65536 00:36:58.780 }, 00:36:58.780 { 00:36:58.780 "name": "BaseBdev2", 00:36:58.780 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:36:58.780 "is_configured": true, 00:36:58.780 "data_offset": 0, 
00:36:58.780 "data_size": 65536 00:36:58.780 }, 00:36:58.780 { 00:36:58.780 "name": "BaseBdev3", 00:36:58.780 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:36:58.780 "is_configured": true, 00:36:58.780 "data_offset": 0, 00:36:58.780 "data_size": 65536 00:36:58.780 }, 00:36:58.780 { 00:36:58.780 "name": "BaseBdev4", 00:36:58.780 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:36:58.780 "is_configured": true, 00:36:58.780 "data_offset": 0, 00:36:58.780 "data_size": 65536 00:36:58.780 } 00:36:58.780 ] 00:36:58.780 }' 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:58.780 02:06:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:59.039 02:06:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:59.039 02:06:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.972 02:06:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.230 02:07:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:00.230 "name": "raid_bdev1", 00:37:00.230 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:37:00.230 "strip_size_kb": 64, 00:37:00.230 "state": "online", 00:37:00.230 "raid_level": "raid5f", 00:37:00.230 "superblock": false, 00:37:00.230 "num_base_bdevs": 4, 00:37:00.230 "num_base_bdevs_discovered": 4, 00:37:00.230 "num_base_bdevs_operational": 4, 00:37:00.230 "process": { 00:37:00.230 "type": "rebuild", 00:37:00.230 "target": "spare", 00:37:00.230 "progress": { 00:37:00.230 "blocks": 136320, 00:37:00.230 "percent": 69 00:37:00.230 } 00:37:00.230 }, 00:37:00.230 "base_bdevs_list": [ 00:37:00.230 { 00:37:00.230 "name": "spare", 00:37:00.230 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:37:00.230 "is_configured": true, 00:37:00.230 "data_offset": 0, 00:37:00.230 "data_size": 65536 00:37:00.230 }, 00:37:00.230 { 00:37:00.230 "name": "BaseBdev2", 00:37:00.230 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:37:00.230 "is_configured": true, 00:37:00.230 "data_offset": 0, 00:37:00.230 "data_size": 65536 00:37:00.230 }, 00:37:00.230 { 00:37:00.230 "name": "BaseBdev3", 00:37:00.230 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:37:00.230 "is_configured": true, 00:37:00.230 "data_offset": 0, 00:37:00.230 "data_size": 65536 00:37:00.230 }, 00:37:00.230 { 00:37:00.230 "name": "BaseBdev4", 00:37:00.230 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:37:00.230 "is_configured": true, 00:37:00.230 "data_offset": 0, 00:37:00.230 "data_size": 65536 00:37:00.230 } 00:37:00.230 ] 00:37:00.230 }' 00:37:00.230 02:07:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:00.230 02:07:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:00.230 02:07:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:00.230 02:07:00 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:37:00.230 02:07:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:01.607 "name": "raid_bdev1", 00:37:01.607 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:37:01.607 "strip_size_kb": 64, 00:37:01.607 "state": "online", 00:37:01.607 "raid_level": "raid5f", 00:37:01.607 "superblock": false, 00:37:01.607 "num_base_bdevs": 4, 00:37:01.607 "num_base_bdevs_discovered": 4, 00:37:01.607 "num_base_bdevs_operational": 4, 00:37:01.607 "process": { 00:37:01.607 "type": "rebuild", 00:37:01.607 "target": "spare", 00:37:01.607 "progress": { 00:37:01.607 "blocks": 163200, 00:37:01.607 "percent": 83 00:37:01.607 } 00:37:01.607 }, 00:37:01.607 "base_bdevs_list": [ 00:37:01.607 { 00:37:01.607 "name": "spare", 00:37:01.607 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:37:01.607 "is_configured": true, 00:37:01.607 "data_offset": 0, 00:37:01.607 "data_size": 65536 00:37:01.607 }, 00:37:01.607 { 00:37:01.607 "name": "BaseBdev2", 00:37:01.607 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:37:01.607 "is_configured": true, 00:37:01.607 "data_offset": 0, 00:37:01.607 "data_size": 65536 00:37:01.607 }, 00:37:01.607 { 00:37:01.607 "name": "BaseBdev3", 00:37:01.607 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:37:01.607 "is_configured": true, 00:37:01.607 "data_offset": 0, 00:37:01.607 "data_size": 65536 00:37:01.607 }, 00:37:01.607 { 00:37:01.607 "name": "BaseBdev4", 00:37:01.607 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:37:01.607 "is_configured": true, 00:37:01.607 "data_offset": 0, 00:37:01.607 "data_size": 65536 00:37:01.607 } 00:37:01.607 ] 00:37:01.607 }' 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:01.607 02:07:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.982 02:07:02 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:02.982 "name": "raid_bdev1", 00:37:02.982 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:37:02.982 "strip_size_kb": 64, 00:37:02.982 "state": "online", 00:37:02.982 "raid_level": "raid5f", 00:37:02.982 "superblock": false, 00:37:02.982 "num_base_bdevs": 4, 00:37:02.982 "num_base_bdevs_discovered": 4, 00:37:02.982 "num_base_bdevs_operational": 4, 00:37:02.982 "process": { 00:37:02.982 "type": "rebuild", 00:37:02.982 "target": "spare", 00:37:02.982 "progress": { 00:37:02.982 "blocks": 190080, 00:37:02.982 "percent": 96 00:37:02.982 } 00:37:02.982 }, 00:37:02.982 "base_bdevs_list": [ 00:37:02.982 { 00:37:02.982 "name": "spare", 00:37:02.982 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:37:02.982 "is_configured": true, 00:37:02.982 "data_offset": 0, 00:37:02.982 "data_size": 65536 00:37:02.982 }, 00:37:02.982 { 00:37:02.982 "name": "BaseBdev2", 00:37:02.982 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:37:02.982 "is_configured": true, 00:37:02.982 "data_offset": 0, 00:37:02.982 "data_size": 65536 00:37:02.982 }, 00:37:02.982 { 00:37:02.982 "name": "BaseBdev3", 00:37:02.982 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:37:02.982 "is_configured": true, 00:37:02.982 "data_offset": 0, 00:37:02.982 "data_size": 65536 00:37:02.982 }, 00:37:02.982 { 00:37:02.982 "name": "BaseBdev4", 00:37:02.982 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:37:02.982 "is_configured": true, 00:37:02.982 "data_offset": 0, 00:37:02.982 "data_size": 65536 00:37:02.982 } 00:37:02.982 ] 00:37:02.982 }' 00:37:02.982 02:07:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:02.982 02:07:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:02.982 02:07:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:03.240 02:07:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:03.240 02:07:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:03.240 [2024-04-24 02:07:03.288520] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:03.240 [2024-04-24 02:07:03.288637] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:03.240 [2024-04-24 02:07:03.288730] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.175 02:07:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:04.434 "name": "raid_bdev1", 00:37:04.434 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:37:04.434 "strip_size_kb": 64, 00:37:04.434 "state": "online", 00:37:04.434 "raid_level": "raid5f", 00:37:04.434 "superblock": false, 00:37:04.434 "num_base_bdevs": 4, 00:37:04.434 "num_base_bdevs_discovered": 4, 00:37:04.434 "num_base_bdevs_operational": 4, 00:37:04.434 "base_bdevs_list": [ 00:37:04.434 { 
00:37:04.434 "name": "spare", 00:37:04.434 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:37:04.434 "is_configured": true, 00:37:04.434 "data_offset": 0, 00:37:04.434 "data_size": 65536 00:37:04.434 }, 00:37:04.434 { 00:37:04.434 "name": "BaseBdev2", 00:37:04.434 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:37:04.434 "is_configured": true, 00:37:04.434 "data_offset": 0, 00:37:04.434 "data_size": 65536 00:37:04.434 }, 00:37:04.434 { 00:37:04.434 "name": "BaseBdev3", 00:37:04.434 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:37:04.434 "is_configured": true, 00:37:04.434 "data_offset": 0, 00:37:04.434 "data_size": 65536 00:37:04.434 }, 00:37:04.434 { 00:37:04.434 "name": "BaseBdev4", 00:37:04.434 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:37:04.434 "is_configured": true, 00:37:04.434 "data_offset": 0, 00:37:04.434 "data_size": 65536 00:37:04.434 } 00:37:04.434 ] 00:37:04.434 }' 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@660 -- # break 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.434 02:07:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:05.013 "name": "raid_bdev1", 00:37:05.013 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:37:05.013 "strip_size_kb": 64, 00:37:05.013 "state": "online", 00:37:05.013 "raid_level": "raid5f", 00:37:05.013 "superblock": false, 00:37:05.013 "num_base_bdevs": 4, 00:37:05.013 "num_base_bdevs_discovered": 4, 00:37:05.013 "num_base_bdevs_operational": 4, 00:37:05.013 "base_bdevs_list": [ 00:37:05.013 { 00:37:05.013 "name": "spare", 00:37:05.013 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:37:05.013 "is_configured": true, 00:37:05.013 "data_offset": 0, 00:37:05.013 "data_size": 65536 00:37:05.013 }, 00:37:05.013 { 00:37:05.013 "name": "BaseBdev2", 00:37:05.013 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:37:05.013 "is_configured": true, 00:37:05.013 "data_offset": 0, 00:37:05.013 "data_size": 65536 00:37:05.013 }, 00:37:05.013 { 00:37:05.013 "name": "BaseBdev3", 00:37:05.013 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:37:05.013 "is_configured": true, 00:37:05.013 "data_offset": 0, 00:37:05.013 "data_size": 65536 00:37:05.013 }, 00:37:05.013 { 00:37:05.013 "name": "BaseBdev4", 00:37:05.013 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:37:05.013 "is_configured": true, 00:37:05.013 "data_offset": 0, 00:37:05.013 "data_size": 65536 00:37:05.013 } 00:37:05.013 ] 00:37:05.013 }' 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.013 02:07:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.271 02:07:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:05.271 "name": "raid_bdev1", 00:37:05.271 "uuid": "3a0014ee-b272-4d54-a524-39268b5bf4b2", 00:37:05.271 "strip_size_kb": 64, 00:37:05.271 "state": "online", 00:37:05.271 "raid_level": "raid5f", 00:37:05.271 "superblock": false, 00:37:05.271 "num_base_bdevs": 4, 00:37:05.271 "num_base_bdevs_discovered": 4, 00:37:05.271 "num_base_bdevs_operational": 4, 00:37:05.271 "base_bdevs_list": [ 00:37:05.271 { 00:37:05.271 "name": "spare", 00:37:05.271 "uuid": "5f8c15ef-5eb4-55c1-ad1a-52f05b3e213b", 00:37:05.271 "is_configured": true, 00:37:05.271 "data_offset": 0, 00:37:05.271 "data_size": 65536 00:37:05.271 }, 00:37:05.271 { 00:37:05.271 "name": "BaseBdev2", 00:37:05.271 "uuid": "3bf9b5c4-2ff9-4bc4-97e6-e413347c9d83", 00:37:05.271 "is_configured": true, 00:37:05.271 "data_offset": 0, 00:37:05.271 "data_size": 65536 00:37:05.271 }, 00:37:05.271 { 00:37:05.271 "name": "BaseBdev3", 00:37:05.271 "uuid": "146a82c3-ce24-4314-afce-7c5482bab700", 00:37:05.271 "is_configured": true, 00:37:05.271 "data_offset": 0, 00:37:05.271 "data_size": 65536 00:37:05.271 }, 00:37:05.271 { 00:37:05.271 "name": "BaseBdev4", 00:37:05.271 "uuid": "f4d3a36b-7867-41be-b95f-36bff27636b9", 00:37:05.271 "is_configured": true, 00:37:05.271 "data_offset": 0, 00:37:05.271 "data_size": 65536 00:37:05.271 } 00:37:05.271 ] 00:37:05.271 }' 00:37:05.271 02:07:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:05.271 02:07:05 -- common/autotest_common.sh@10 -- # set +x 00:37:05.837 02:07:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:06.096 [2024-04-24 02:07:06.146294] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:06.096 [2024-04-24 02:07:06.146344] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:06.096 [2024-04-24 02:07:06.146451] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:06.096 [2024-04-24 02:07:06.146554] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:06.096 [2024-04-24 02:07:06.146568] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:37:06.096 02:07:06 -- bdev/bdev_raid.sh@671 -- # jq length 
00:37:06.096 02:07:06 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.666 02:07:06 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:37:06.666 02:07:06 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:37:06.666 02:07:06 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@12 -- # local i 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:06.666 02:07:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:06.923 /dev/nbd0 00:37:06.923 02:07:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:06.923 02:07:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:06.923 02:07:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:06.923 02:07:06 -- common/autotest_common.sh@855 -- # local i 00:37:06.923 02:07:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:06.923 02:07:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:06.923 02:07:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:06.923 02:07:06 -- common/autotest_common.sh@859 -- # break 00:37:06.923 02:07:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:06.923 02:07:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:06.923 02:07:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:06.923 1+0 records in 00:37:06.923 1+0 records out 00:37:06.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058447 s, 7.0 MB/s 00:37:06.923 02:07:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:06.923 02:07:06 -- common/autotest_common.sh@872 -- # size=4096 00:37:06.923 02:07:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:06.923 02:07:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:06.923 02:07:06 -- common/autotest_common.sh@875 -- # return 0 00:37:06.923 02:07:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:06.923 02:07:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:06.923 02:07:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:07.179 /dev/nbd1 00:37:07.179 02:07:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:07.179 02:07:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:07.179 02:07:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:37:07.179 02:07:07 -- common/autotest_common.sh@855 -- # local i 00:37:07.179 02:07:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:07.179 02:07:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:07.179 02:07:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:37:07.179 02:07:07 -- common/autotest_common.sh@859 -- # break 
00:37:07.179 02:07:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:07.179 02:07:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:07.179 02:07:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:07.179 1+0 records in 00:37:07.179 1+0 records out 00:37:07.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697459 s, 5.9 MB/s 00:37:07.179 02:07:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:07.179 02:07:07 -- common/autotest_common.sh@872 -- # size=4096 00:37:07.179 02:07:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:07.179 02:07:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:07.179 02:07:07 -- common/autotest_common.sh@875 -- # return 0 00:37:07.179 02:07:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:07.179 02:07:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:07.179 02:07:07 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:07.437 02:07:07 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:07.437 02:07:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:07.437 02:07:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:07.437 02:07:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:07.437 02:07:07 -- bdev/nbd_common.sh@51 -- # local i 00:37:07.437 02:07:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:07.437 02:07:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@41 -- # break 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@45 -- # return 0 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:07.695 02:07:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:07.953 02:07:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@41 -- # break 00:37:08.211 02:07:08 -- bdev/nbd_common.sh@45 -- # return 0 00:37:08.211 02:07:08 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:37:08.211 02:07:08 -- bdev/bdev_raid.sh@709 -- # killprocess 140241 00:37:08.211 02:07:08 -- common/autotest_common.sh@936 -- # '[' -z 140241 ']' 00:37:08.211 02:07:08 -- common/autotest_common.sh@940 -- # kill -0 140241 00:37:08.211 02:07:08 -- common/autotest_common.sh@941 -- # uname 00:37:08.211 02:07:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:08.211 02:07:08 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140241 00:37:08.211 02:07:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:08.211 killing process with pid 140241 00:37:08.211 02:07:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:08.211 02:07:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140241' 00:37:08.211 02:07:08 -- common/autotest_common.sh@955 -- # kill 140241 00:37:08.211 Received shutdown signal, test time was about 60.000000 seconds 00:37:08.211 00:37:08.211 Latency(us) 00:37:08.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.211 =================================================================================================================== 00:37:08.211 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:08.211 02:07:08 -- common/autotest_common.sh@960 -- # wait 140241 00:37:08.211 [2024-04-24 02:07:08.076273] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:08.881 [2024-04-24 02:07:08.668690] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@711 -- # return 0 00:37:10.252 00:37:10.252 real 0m28.218s 00:37:10.252 user 0m40.937s 00:37:10.252 sys 0m3.634s 00:37:10.252 02:07:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:10.252 02:07:10 -- common/autotest_common.sh@10 -- # set +x 00:37:10.252 ************************************ 00:37:10.252 END TEST raid5f_rebuild_test 00:37:10.252 ************************************ 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:37:10.252 02:07:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:37:10.252 02:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:10.252 02:07:10 -- common/autotest_common.sh@10 -- # set +x 00:37:10.252 ************************************ 00:37:10.252 START TEST raid5f_rebuild_test_sb 00:37:10.252 ************************************ 00:37:10.252 02:07:10 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 true false 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
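The END TEST marker above closes raid5f_rebuild_test, and the superblock variant now being set up repeats the same verification flow. For readability, here is a condensed sketch of the polling pattern the trace keeps repeating (bdev_raid.sh lines 183-191 and 657-662 in the xtrace). The helper name, the 60-second budget and the hard-coded bdev name are illustrative assumptions; only rpc.py, jq and the RPC/filter strings are taken from the trace itself.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Succeeds while raid_bdev1 still reports an active rebuild onto "spare".
    rebuild_in_progress() {
        local info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"'   <<< "$info") == rebuild ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]
    }

    # Poll once per second until the rebuild finishes or the time budget runs out.
    timeout=$((SECONDS + 60))
    while (( SECONDS < timeout )) && rebuild_in_progress; do
        sleep 1
    done

Each iteration corresponds to one of the JSON dumps above, with process.progress.blocks and percent advancing until process.type falls back to "none".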
00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=140893 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140893 /var/tmp/spdk-raid.sock 00:37:10.252 02:07:10 -- common/autotest_common.sh@817 -- # '[' -z 140893 ']' 00:37:10.252 02:07:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:10.252 02:07:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:10.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:10.252 02:07:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:10.252 02:07:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:10.252 02:07:10 -- common/autotest_common.sh@10 -- # set +x 00:37:10.252 02:07:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:10.252 [2024-04-24 02:07:10.307465] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:10.252 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:10.252 Zero copy mechanism will not be used. 
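Once bdevperf is up, the traces that follow build the array entirely over RPC: four malloc bdevs wrapped in passthru bdevs become the base devices, the spare additionally sits behind a delay bdev, and the raid5f bdev is assembled with a 64 KiB strip and superblock enabled. Condensed from the commands visible in the trace below (the $rpc shorthand is an assumption for brevity; the per-bdev steps repeat for BaseBdev2 through BaseBdev4):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # ... same malloc + passthru pair for BaseBdev2, BaseBdev3, BaseBdev4 ...
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    $rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1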
00:37:10.252 [2024-04-24 02:07:10.307764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140893 ] 00:37:10.511 [2024-04-24 02:07:10.491709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.769 [2024-04-24 02:07:10.705516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.027 [2024-04-24 02:07:10.956856] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:11.284 02:07:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:11.284 02:07:11 -- common/autotest_common.sh@850 -- # return 0 00:37:11.284 02:07:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:11.284 02:07:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:11.284 02:07:11 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:11.542 BaseBdev1_malloc 00:37:11.542 02:07:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:11.800 [2024-04-24 02:07:11.769662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:11.800 [2024-04-24 02:07:11.769830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:11.800 [2024-04-24 02:07:11.769885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:37:11.800 [2024-04-24 02:07:11.769934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:11.800 [2024-04-24 02:07:11.772640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:11.800 [2024-04-24 02:07:11.772696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:11.800 BaseBdev1 00:37:11.800 02:07:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:11.800 02:07:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:11.800 02:07:11 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:12.058 BaseBdev2_malloc 00:37:12.058 02:07:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:12.317 [2024-04-24 02:07:12.394269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:12.317 [2024-04-24 02:07:12.394368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.317 [2024-04-24 02:07:12.394414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:12.317 [2024-04-24 02:07:12.394475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.317 [2024-04-24 02:07:12.397205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.317 [2024-04-24 02:07:12.397269] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:12.317 BaseBdev2 00:37:12.574 02:07:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:12.574 02:07:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:12.574 02:07:12 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:12.832 BaseBdev3_malloc 00:37:12.832 02:07:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:37:13.090 [2024-04-24 02:07:13.025699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:37:13.090 [2024-04-24 02:07:13.025818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:13.090 [2024-04-24 02:07:13.025865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:37:13.090 [2024-04-24 02:07:13.025913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:13.090 [2024-04-24 02:07:13.028405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:13.090 [2024-04-24 02:07:13.028467] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:13.090 BaseBdev3 00:37:13.090 02:07:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:13.090 02:07:13 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:13.090 02:07:13 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:37:13.349 BaseBdev4_malloc 00:37:13.349 02:07:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:37:13.606 [2024-04-24 02:07:13.485340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:37:13.606 [2024-04-24 02:07:13.485438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:13.606 [2024-04-24 02:07:13.485477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:13.606 [2024-04-24 02:07:13.485517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:13.606 [2024-04-24 02:07:13.488116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:13.606 [2024-04-24 02:07:13.488189] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:13.606 BaseBdev4 00:37:13.606 02:07:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:37:13.864 spare_malloc 00:37:13.864 02:07:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:14.123 spare_delay 00:37:14.123 02:07:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:14.381 [2024-04-24 02:07:14.313550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:14.381 [2024-04-24 02:07:14.313663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:14.381 [2024-04-24 02:07:14.313701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:14.381 [2024-04-24 02:07:14.313755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:14.381 [2024-04-24 02:07:14.316442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:37:14.381 [2024-04-24 02:07:14.316520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:14.381 spare 00:37:14.381 02:07:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:37:14.640 [2024-04-24 02:07:14.593649] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:14.640 [2024-04-24 02:07:14.596012] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:14.640 [2024-04-24 02:07:14.596119] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:14.640 [2024-04-24 02:07:14.596187] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:14.640 [2024-04-24 02:07:14.596412] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:37:14.640 [2024-04-24 02:07:14.596422] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:14.640 [2024-04-24 02:07:14.596572] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:37:14.640 [2024-04-24 02:07:14.606463] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:37:14.640 [2024-04-24 02:07:14.606512] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:37:14.640 [2024-04-24 02:07:14.606797] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.640 02:07:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.899 02:07:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:14.899 "name": "raid_bdev1", 00:37:14.899 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:14.899 "strip_size_kb": 64, 00:37:14.899 "state": "online", 00:37:14.899 "raid_level": "raid5f", 00:37:14.899 "superblock": true, 00:37:14.899 "num_base_bdevs": 4, 00:37:14.899 "num_base_bdevs_discovered": 4, 00:37:14.899 "num_base_bdevs_operational": 4, 00:37:14.899 "base_bdevs_list": [ 00:37:14.899 { 00:37:14.899 "name": "BaseBdev1", 00:37:14.899 "uuid": "ea9727a3-d518-54e4-96c8-bf94ff472496", 00:37:14.899 "is_configured": true, 00:37:14.899 "data_offset": 2048, 00:37:14.899 "data_size": 63488 00:37:14.899 }, 00:37:14.899 { 00:37:14.899 "name": "BaseBdev2", 00:37:14.899 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:14.899 "is_configured": true, 00:37:14.899 
"data_offset": 2048, 00:37:14.899 "data_size": 63488 00:37:14.899 }, 00:37:14.899 { 00:37:14.899 "name": "BaseBdev3", 00:37:14.899 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:14.899 "is_configured": true, 00:37:14.899 "data_offset": 2048, 00:37:14.899 "data_size": 63488 00:37:14.899 }, 00:37:14.899 { 00:37:14.899 "name": "BaseBdev4", 00:37:14.899 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:14.899 "is_configured": true, 00:37:14.899 "data_offset": 2048, 00:37:14.899 "data_size": 63488 00:37:14.899 } 00:37:14.899 ] 00:37:14.899 }' 00:37:14.899 02:07:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:14.899 02:07:14 -- common/autotest_common.sh@10 -- # set +x 00:37:15.466 02:07:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:15.466 02:07:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:37:16.032 [2024-04-24 02:07:15.825329] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:16.032 02:07:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:37:16.032 02:07:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.032 02:07:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:16.289 02:07:16 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:37:16.289 02:07:16 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:37:16.289 02:07:16 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:37:16.289 02:07:16 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@12 -- # local i 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:16.289 02:07:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:16.548 [2024-04-24 02:07:16.389370] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:16.548 /dev/nbd0 00:37:16.548 02:07:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:16.548 02:07:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:16.548 02:07:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:16.548 02:07:16 -- common/autotest_common.sh@855 -- # local i 00:37:16.548 02:07:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:16.548 02:07:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:16.548 02:07:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:16.548 02:07:16 -- common/autotest_common.sh@859 -- # break 00:37:16.548 02:07:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:16.548 02:07:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:16.548 02:07:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:16.548 1+0 records in 00:37:16.548 1+0 records out 00:37:16.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000486818 s, 8.4 MB/s 00:37:16.548 02:07:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:16.548 02:07:16 -- common/autotest_common.sh@872 -- # size=4096 00:37:16.548 02:07:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:16.548 02:07:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:16.548 02:07:16 -- common/autotest_common.sh@875 -- # return 0 00:37:16.548 02:07:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:16.548 02:07:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:16.548 02:07:16 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:37:16.548 02:07:16 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:37:16.548 02:07:16 -- bdev/bdev_raid.sh@582 -- # echo 192 00:37:16.548 02:07:16 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:37:17.113 496+0 records in 00:37:17.113 496+0 records out 00:37:17.113 97517568 bytes (98 MB, 93 MiB) copied, 0.652541 s, 149 MB/s 00:37:17.113 02:07:17 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:17.113 02:07:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:17.113 02:07:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:17.113 02:07:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:17.113 02:07:17 -- bdev/nbd_common.sh@51 -- # local i 00:37:17.113 02:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:17.113 02:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:17.371 [2024-04-24 02:07:17.414799] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@41 -- # break 00:37:17.371 02:07:17 -- bdev/nbd_common.sh@45 -- # return 0 00:37:17.371 02:07:17 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:17.629 [2024-04-24 02:07:17.609783] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:37:17.629 02:07:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.887 02:07:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:17.887 "name": "raid_bdev1", 00:37:17.887 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:17.887 "strip_size_kb": 64, 00:37:17.887 "state": "online", 00:37:17.887 "raid_level": "raid5f", 00:37:17.887 "superblock": true, 00:37:17.887 "num_base_bdevs": 4, 00:37:17.887 "num_base_bdevs_discovered": 3, 00:37:17.887 "num_base_bdevs_operational": 3, 00:37:17.887 "base_bdevs_list": [ 00:37:17.887 { 00:37:17.887 "name": null, 00:37:17.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.887 "is_configured": false, 00:37:17.887 "data_offset": 2048, 00:37:17.887 "data_size": 63488 00:37:17.887 }, 00:37:17.887 { 00:37:17.887 "name": "BaseBdev2", 00:37:17.887 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:17.887 "is_configured": true, 00:37:17.887 "data_offset": 2048, 00:37:17.887 "data_size": 63488 00:37:17.887 }, 00:37:17.887 { 00:37:17.887 "name": "BaseBdev3", 00:37:17.887 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:17.887 "is_configured": true, 00:37:17.887 "data_offset": 2048, 00:37:17.887 "data_size": 63488 00:37:17.887 }, 00:37:17.887 { 00:37:17.887 "name": "BaseBdev4", 00:37:17.887 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:17.887 "is_configured": true, 00:37:17.887 "data_offset": 2048, 00:37:17.887 "data_size": 63488 00:37:17.887 } 00:37:17.887 ] 00:37:17.887 }' 00:37:17.887 02:07:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:17.887 02:07:17 -- common/autotest_common.sh@10 -- # set +x 00:37:18.453 02:07:18 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:18.711 [2024-04-24 02:07:18.626026] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:37:18.711 [2024-04-24 02:07:18.626093] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:18.711 [2024-04-24 02:07:18.646562] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:37:18.711 [2024-04-24 02:07:18.658780] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:18.711 02:07:18 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:19.677 02:07:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.935 02:07:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:19.935 "name": "raid_bdev1", 00:37:19.935 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:19.935 "strip_size_kb": 64, 00:37:19.935 "state": "online", 00:37:19.935 "raid_level": "raid5f", 00:37:19.935 "superblock": true, 00:37:19.935 "num_base_bdevs": 4, 00:37:19.935 "num_base_bdevs_discovered": 4, 00:37:19.935 "num_base_bdevs_operational": 4, 00:37:19.935 "process": { 00:37:19.935 "type": "rebuild", 00:37:19.935 "target": "spare", 00:37:19.935 "progress": { 
00:37:19.935 "blocks": 23040, 00:37:19.935 "percent": 12 00:37:19.935 } 00:37:19.935 }, 00:37:19.935 "base_bdevs_list": [ 00:37:19.935 { 00:37:19.935 "name": "spare", 00:37:19.935 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:19.935 "is_configured": true, 00:37:19.935 "data_offset": 2048, 00:37:19.935 "data_size": 63488 00:37:19.935 }, 00:37:19.935 { 00:37:19.935 "name": "BaseBdev2", 00:37:19.935 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:19.935 "is_configured": true, 00:37:19.935 "data_offset": 2048, 00:37:19.935 "data_size": 63488 00:37:19.935 }, 00:37:19.935 { 00:37:19.935 "name": "BaseBdev3", 00:37:19.935 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:19.935 "is_configured": true, 00:37:19.935 "data_offset": 2048, 00:37:19.935 "data_size": 63488 00:37:19.935 }, 00:37:19.935 { 00:37:19.935 "name": "BaseBdev4", 00:37:19.935 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:19.935 "is_configured": true, 00:37:19.935 "data_offset": 2048, 00:37:19.935 "data_size": 63488 00:37:19.935 } 00:37:19.935 ] 00:37:19.935 }' 00:37:19.935 02:07:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:19.935 02:07:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:19.935 02:07:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:20.193 02:07:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:20.193 02:07:20 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:20.451 [2024-04-24 02:07:20.280307] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:20.452 [2024-04-24 02:07:20.374068] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:20.452 [2024-04-24 02:07:20.374158] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.452 02:07:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.710 02:07:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:20.710 "name": "raid_bdev1", 00:37:20.710 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:20.710 "strip_size_kb": 64, 00:37:20.710 "state": "online", 00:37:20.710 "raid_level": "raid5f", 00:37:20.710 "superblock": true, 00:37:20.710 "num_base_bdevs": 4, 00:37:20.710 "num_base_bdevs_discovered": 3, 00:37:20.710 "num_base_bdevs_operational": 3, 00:37:20.710 "base_bdevs_list": [ 00:37:20.710 { 00:37:20.710 "name": null, 00:37:20.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.710 "is_configured": 
false, 00:37:20.710 "data_offset": 2048, 00:37:20.710 "data_size": 63488 00:37:20.710 }, 00:37:20.710 { 00:37:20.710 "name": "BaseBdev2", 00:37:20.710 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:20.710 "is_configured": true, 00:37:20.710 "data_offset": 2048, 00:37:20.710 "data_size": 63488 00:37:20.710 }, 00:37:20.710 { 00:37:20.710 "name": "BaseBdev3", 00:37:20.710 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:20.710 "is_configured": true, 00:37:20.710 "data_offset": 2048, 00:37:20.710 "data_size": 63488 00:37:20.710 }, 00:37:20.710 { 00:37:20.710 "name": "BaseBdev4", 00:37:20.710 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:20.710 "is_configured": true, 00:37:20.710 "data_offset": 2048, 00:37:20.710 "data_size": 63488 00:37:20.710 } 00:37:20.710 ] 00:37:20.710 }' 00:37:20.710 02:07:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:20.710 02:07:20 -- common/autotest_common.sh@10 -- # set +x 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.279 02:07:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.537 02:07:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:21.537 "name": "raid_bdev1", 00:37:21.537 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:21.537 "strip_size_kb": 64, 00:37:21.537 "state": "online", 00:37:21.537 "raid_level": "raid5f", 00:37:21.537 "superblock": true, 00:37:21.537 "num_base_bdevs": 4, 00:37:21.537 "num_base_bdevs_discovered": 3, 00:37:21.537 "num_base_bdevs_operational": 3, 00:37:21.537 "base_bdevs_list": [ 00:37:21.537 { 00:37:21.537 "name": null, 00:37:21.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.537 "is_configured": false, 00:37:21.537 "data_offset": 2048, 00:37:21.537 "data_size": 63488 00:37:21.537 }, 00:37:21.537 { 00:37:21.537 "name": "BaseBdev2", 00:37:21.537 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:21.537 "is_configured": true, 00:37:21.537 "data_offset": 2048, 00:37:21.537 "data_size": 63488 00:37:21.537 }, 00:37:21.537 { 00:37:21.537 "name": "BaseBdev3", 00:37:21.537 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:21.537 "is_configured": true, 00:37:21.537 "data_offset": 2048, 00:37:21.537 "data_size": 63488 00:37:21.537 }, 00:37:21.537 { 00:37:21.537 "name": "BaseBdev4", 00:37:21.537 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:21.537 "is_configured": true, 00:37:21.537 "data_offset": 2048, 00:37:21.537 "data_size": 63488 00:37:21.537 } 00:37:21.537 ] 00:37:21.537 }' 00:37:21.537 02:07:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:21.537 02:07:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:21.537 02:07:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:21.537 02:07:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:21.537 02:07:21 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:21.795 [2024-04-24 02:07:21.851188] bdev_raid.c:3278:raid_bdev_attach_base_bdev: 
*DEBUG*: attach_base_device: spare 00:37:21.795 [2024-04-24 02:07:21.851237] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:21.795 [2024-04-24 02:07:21.867753] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:37:21.795 [2024-04-24 02:07:21.879164] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:22.053 02:07:21 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.989 02:07:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:23.249 "name": "raid_bdev1", 00:37:23.249 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:23.249 "strip_size_kb": 64, 00:37:23.249 "state": "online", 00:37:23.249 "raid_level": "raid5f", 00:37:23.249 "superblock": true, 00:37:23.249 "num_base_bdevs": 4, 00:37:23.249 "num_base_bdevs_discovered": 4, 00:37:23.249 "num_base_bdevs_operational": 4, 00:37:23.249 "process": { 00:37:23.249 "type": "rebuild", 00:37:23.249 "target": "spare", 00:37:23.249 "progress": { 00:37:23.249 "blocks": 23040, 00:37:23.249 "percent": 12 00:37:23.249 } 00:37:23.249 }, 00:37:23.249 "base_bdevs_list": [ 00:37:23.249 { 00:37:23.249 "name": "spare", 00:37:23.249 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:23.249 "is_configured": true, 00:37:23.249 "data_offset": 2048, 00:37:23.249 "data_size": 63488 00:37:23.249 }, 00:37:23.249 { 00:37:23.249 "name": "BaseBdev2", 00:37:23.249 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:23.249 "is_configured": true, 00:37:23.249 "data_offset": 2048, 00:37:23.249 "data_size": 63488 00:37:23.249 }, 00:37:23.249 { 00:37:23.249 "name": "BaseBdev3", 00:37:23.249 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:23.249 "is_configured": true, 00:37:23.249 "data_offset": 2048, 00:37:23.249 "data_size": 63488 00:37:23.249 }, 00:37:23.249 { 00:37:23.249 "name": "BaseBdev4", 00:37:23.249 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:23.249 "is_configured": true, 00:37:23.249 "data_offset": 2048, 00:37:23.249 "data_size": 63488 00:37:23.249 } 00:37:23.249 ] 00:37:23.249 }' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:37:23.249 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@657 -- # local timeout=818 
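The sizes the superblock variant reports are mutually consistent (a back-of-the-envelope check using only numbers from the trace): each base bdev is a 32 MiB malloc device with 512-byte blocks, i.e. 65536 blocks; the superblock reserves data_offset = 2048 blocks (1 MiB), leaving data_size = 63488 blocks per base bdev; with raid5f on 4 base bdevs (3 data plus 1 parity per stripe) the array exposes 3 x 63488 = 190464 blocks, matching the raid_bdev_size read back earlier. Likewise the write unit used to seed the array is 384 blocks x 512 B = 196608 B = 3 x 64 KiB, one full stripe of data, so the dd of 496 such writes covers exactly 190464 blocks and accounts for the 97517568 bytes (about 93 MiB) it reported.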
00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.249 02:07:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:23.508 02:07:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:23.508 "name": "raid_bdev1", 00:37:23.508 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:23.508 "strip_size_kb": 64, 00:37:23.508 "state": "online", 00:37:23.508 "raid_level": "raid5f", 00:37:23.508 "superblock": true, 00:37:23.508 "num_base_bdevs": 4, 00:37:23.508 "num_base_bdevs_discovered": 4, 00:37:23.508 "num_base_bdevs_operational": 4, 00:37:23.508 "process": { 00:37:23.508 "type": "rebuild", 00:37:23.508 "target": "spare", 00:37:23.508 "progress": { 00:37:23.508 "blocks": 28800, 00:37:23.508 "percent": 15 00:37:23.508 } 00:37:23.508 }, 00:37:23.508 "base_bdevs_list": [ 00:37:23.508 { 00:37:23.508 "name": "spare", 00:37:23.508 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:23.508 "is_configured": true, 00:37:23.508 "data_offset": 2048, 00:37:23.508 "data_size": 63488 00:37:23.508 }, 00:37:23.508 { 00:37:23.508 "name": "BaseBdev2", 00:37:23.508 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:23.508 "is_configured": true, 00:37:23.508 "data_offset": 2048, 00:37:23.508 "data_size": 63488 00:37:23.508 }, 00:37:23.508 { 00:37:23.508 "name": "BaseBdev3", 00:37:23.508 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:23.508 "is_configured": true, 00:37:23.508 "data_offset": 2048, 00:37:23.508 "data_size": 63488 00:37:23.508 }, 00:37:23.508 { 00:37:23.508 "name": "BaseBdev4", 00:37:23.508 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:23.508 "is_configured": true, 00:37:23.508 "data_offset": 2048, 00:37:23.508 "data_size": 63488 00:37:23.508 } 00:37:23.508 ] 00:37:23.508 }' 00:37:23.508 02:07:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:23.508 02:07:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:23.508 02:07:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:23.508 02:07:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:23.508 02:07:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:24.885 "name": 
"raid_bdev1", 00:37:24.885 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:24.885 "strip_size_kb": 64, 00:37:24.885 "state": "online", 00:37:24.885 "raid_level": "raid5f", 00:37:24.885 "superblock": true, 00:37:24.885 "num_base_bdevs": 4, 00:37:24.885 "num_base_bdevs_discovered": 4, 00:37:24.885 "num_base_bdevs_operational": 4, 00:37:24.885 "process": { 00:37:24.885 "type": "rebuild", 00:37:24.885 "target": "spare", 00:37:24.885 "progress": { 00:37:24.885 "blocks": 53760, 00:37:24.885 "percent": 28 00:37:24.885 } 00:37:24.885 }, 00:37:24.885 "base_bdevs_list": [ 00:37:24.885 { 00:37:24.885 "name": "spare", 00:37:24.885 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:24.885 "is_configured": true, 00:37:24.885 "data_offset": 2048, 00:37:24.885 "data_size": 63488 00:37:24.885 }, 00:37:24.885 { 00:37:24.885 "name": "BaseBdev2", 00:37:24.885 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:24.885 "is_configured": true, 00:37:24.885 "data_offset": 2048, 00:37:24.885 "data_size": 63488 00:37:24.885 }, 00:37:24.885 { 00:37:24.885 "name": "BaseBdev3", 00:37:24.885 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:24.885 "is_configured": true, 00:37:24.885 "data_offset": 2048, 00:37:24.885 "data_size": 63488 00:37:24.885 }, 00:37:24.885 { 00:37:24.885 "name": "BaseBdev4", 00:37:24.885 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:24.885 "is_configured": true, 00:37:24.885 "data_offset": 2048, 00:37:24.885 "data_size": 63488 00:37:24.885 } 00:37:24.885 ] 00:37:24.885 }' 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:24.885 02:07:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.264 02:07:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.264 02:07:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:26.264 "name": "raid_bdev1", 00:37:26.264 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:26.264 "strip_size_kb": 64, 00:37:26.264 "state": "online", 00:37:26.264 "raid_level": "raid5f", 00:37:26.264 "superblock": true, 00:37:26.264 "num_base_bdevs": 4, 00:37:26.264 "num_base_bdevs_discovered": 4, 00:37:26.264 "num_base_bdevs_operational": 4, 00:37:26.264 "process": { 00:37:26.264 "type": "rebuild", 00:37:26.264 "target": "spare", 00:37:26.264 "progress": { 00:37:26.264 "blocks": 80640, 00:37:26.264 "percent": 42 00:37:26.264 } 00:37:26.264 }, 00:37:26.264 "base_bdevs_list": [ 00:37:26.264 { 00:37:26.264 "name": "spare", 00:37:26.264 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:26.264 "is_configured": true, 00:37:26.264 "data_offset": 2048, 00:37:26.264 "data_size": 63488 00:37:26.264 }, 00:37:26.264 { 00:37:26.264 
"name": "BaseBdev2", 00:37:26.264 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:26.264 "is_configured": true, 00:37:26.264 "data_offset": 2048, 00:37:26.264 "data_size": 63488 00:37:26.264 }, 00:37:26.264 { 00:37:26.264 "name": "BaseBdev3", 00:37:26.264 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:26.264 "is_configured": true, 00:37:26.264 "data_offset": 2048, 00:37:26.264 "data_size": 63488 00:37:26.264 }, 00:37:26.264 { 00:37:26.264 "name": "BaseBdev4", 00:37:26.264 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:26.264 "is_configured": true, 00:37:26.264 "data_offset": 2048, 00:37:26.264 "data_size": 63488 00:37:26.264 } 00:37:26.264 ] 00:37:26.264 }' 00:37:26.264 02:07:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:26.264 02:07:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:26.264 02:07:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:26.264 02:07:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:26.264 02:07:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:27.641 "name": "raid_bdev1", 00:37:27.641 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:27.641 "strip_size_kb": 64, 00:37:27.641 "state": "online", 00:37:27.641 "raid_level": "raid5f", 00:37:27.641 "superblock": true, 00:37:27.641 "num_base_bdevs": 4, 00:37:27.641 "num_base_bdevs_discovered": 4, 00:37:27.641 "num_base_bdevs_operational": 4, 00:37:27.641 "process": { 00:37:27.641 "type": "rebuild", 00:37:27.641 "target": "spare", 00:37:27.641 "progress": { 00:37:27.641 "blocks": 107520, 00:37:27.641 "percent": 56 00:37:27.641 } 00:37:27.641 }, 00:37:27.641 "base_bdevs_list": [ 00:37:27.641 { 00:37:27.641 "name": "spare", 00:37:27.641 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:27.641 "is_configured": true, 00:37:27.641 "data_offset": 2048, 00:37:27.641 "data_size": 63488 00:37:27.641 }, 00:37:27.641 { 00:37:27.641 "name": "BaseBdev2", 00:37:27.641 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:27.641 "is_configured": true, 00:37:27.641 "data_offset": 2048, 00:37:27.641 "data_size": 63488 00:37:27.641 }, 00:37:27.641 { 00:37:27.641 "name": "BaseBdev3", 00:37:27.641 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:27.641 "is_configured": true, 00:37:27.641 "data_offset": 2048, 00:37:27.641 "data_size": 63488 00:37:27.641 }, 00:37:27.641 { 00:37:27.641 "name": "BaseBdev4", 00:37:27.641 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:27.641 "is_configured": true, 00:37:27.641 "data_offset": 2048, 00:37:27.641 "data_size": 63488 00:37:27.641 } 00:37:27.641 ] 00:37:27.641 }' 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:27.641 02:07:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:29.017 "name": "raid_bdev1", 00:37:29.017 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:29.017 "strip_size_kb": 64, 00:37:29.017 "state": "online", 00:37:29.017 "raid_level": "raid5f", 00:37:29.017 "superblock": true, 00:37:29.017 "num_base_bdevs": 4, 00:37:29.017 "num_base_bdevs_discovered": 4, 00:37:29.017 "num_base_bdevs_operational": 4, 00:37:29.017 "process": { 00:37:29.017 "type": "rebuild", 00:37:29.017 "target": "spare", 00:37:29.017 "progress": { 00:37:29.017 "blocks": 134400, 00:37:29.017 "percent": 70 00:37:29.017 } 00:37:29.017 }, 00:37:29.017 "base_bdevs_list": [ 00:37:29.017 { 00:37:29.017 "name": "spare", 00:37:29.017 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:29.017 "is_configured": true, 00:37:29.017 "data_offset": 2048, 00:37:29.017 "data_size": 63488 00:37:29.017 }, 00:37:29.017 { 00:37:29.017 "name": "BaseBdev2", 00:37:29.017 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:29.017 "is_configured": true, 00:37:29.017 "data_offset": 2048, 00:37:29.017 "data_size": 63488 00:37:29.017 }, 00:37:29.017 { 00:37:29.017 "name": "BaseBdev3", 00:37:29.017 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:29.017 "is_configured": true, 00:37:29.017 "data_offset": 2048, 00:37:29.017 "data_size": 63488 00:37:29.017 }, 00:37:29.017 { 00:37:29.017 "name": "BaseBdev4", 00:37:29.017 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:29.017 "is_configured": true, 00:37:29.017 "data_offset": 2048, 00:37:29.017 "data_size": 63488 00:37:29.017 } 00:37:29.017 ] 00:37:29.017 }' 00:37:29.017 02:07:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:29.017 02:07:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:29.017 02:07:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:29.017 02:07:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:29.017 02:07:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:30.391 "name": "raid_bdev1", 00:37:30.391 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:30.391 "strip_size_kb": 64, 00:37:30.391 "state": "online", 00:37:30.391 "raid_level": "raid5f", 00:37:30.391 "superblock": true, 00:37:30.391 "num_base_bdevs": 4, 00:37:30.391 "num_base_bdevs_discovered": 4, 00:37:30.391 "num_base_bdevs_operational": 4, 00:37:30.391 "process": { 00:37:30.391 "type": "rebuild", 00:37:30.391 "target": "spare", 00:37:30.391 "progress": { 00:37:30.391 "blocks": 159360, 00:37:30.391 "percent": 83 00:37:30.391 } 00:37:30.391 }, 00:37:30.391 "base_bdevs_list": [ 00:37:30.391 { 00:37:30.391 "name": "spare", 00:37:30.391 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:30.391 "is_configured": true, 00:37:30.391 "data_offset": 2048, 00:37:30.391 "data_size": 63488 00:37:30.391 }, 00:37:30.391 { 00:37:30.391 "name": "BaseBdev2", 00:37:30.391 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:30.391 "is_configured": true, 00:37:30.391 "data_offset": 2048, 00:37:30.391 "data_size": 63488 00:37:30.391 }, 00:37:30.391 { 00:37:30.391 "name": "BaseBdev3", 00:37:30.391 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:30.391 "is_configured": true, 00:37:30.391 "data_offset": 2048, 00:37:30.391 "data_size": 63488 00:37:30.391 }, 00:37:30.391 { 00:37:30.391 "name": "BaseBdev4", 00:37:30.391 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:30.391 "is_configured": true, 00:37:30.391 "data_offset": 2048, 00:37:30.391 "data_size": 63488 00:37:30.391 } 00:37:30.391 ] 00:37:30.391 }' 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:30.391 02:07:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:31.764 "name": "raid_bdev1", 00:37:31.764 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:31.764 "strip_size_kb": 64, 00:37:31.764 "state": "online", 00:37:31.764 "raid_level": "raid5f", 00:37:31.764 "superblock": true, 00:37:31.764 "num_base_bdevs": 4, 00:37:31.764 "num_base_bdevs_discovered": 4, 00:37:31.764 "num_base_bdevs_operational": 4, 00:37:31.764 "process": { 00:37:31.764 "type": "rebuild", 00:37:31.764 "target": "spare", 00:37:31.764 "progress": { 00:37:31.764 "blocks": 186240, 00:37:31.764 "percent": 97 00:37:31.764 } 00:37:31.764 }, 
00:37:31.764 "base_bdevs_list": [ 00:37:31.764 { 00:37:31.764 "name": "spare", 00:37:31.764 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:31.764 "is_configured": true, 00:37:31.764 "data_offset": 2048, 00:37:31.764 "data_size": 63488 00:37:31.764 }, 00:37:31.764 { 00:37:31.764 "name": "BaseBdev2", 00:37:31.764 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:31.764 "is_configured": true, 00:37:31.764 "data_offset": 2048, 00:37:31.764 "data_size": 63488 00:37:31.764 }, 00:37:31.764 { 00:37:31.764 "name": "BaseBdev3", 00:37:31.764 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:31.764 "is_configured": true, 00:37:31.764 "data_offset": 2048, 00:37:31.764 "data_size": 63488 00:37:31.764 }, 00:37:31.764 { 00:37:31.764 "name": "BaseBdev4", 00:37:31.764 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:31.764 "is_configured": true, 00:37:31.764 "data_offset": 2048, 00:37:31.764 "data_size": 63488 00:37:31.764 } 00:37:31.764 ] 00:37:31.764 }' 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:31.764 02:07:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:32.022 [2024-04-24 02:07:31.967259] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:32.022 [2024-04-24 02:07:31.967359] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:32.022 [2024-04-24 02:07:31.967562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.981 02:07:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:33.240 "name": "raid_bdev1", 00:37:33.240 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:33.240 "strip_size_kb": 64, 00:37:33.240 "state": "online", 00:37:33.240 "raid_level": "raid5f", 00:37:33.240 "superblock": true, 00:37:33.240 "num_base_bdevs": 4, 00:37:33.240 "num_base_bdevs_discovered": 4, 00:37:33.240 "num_base_bdevs_operational": 4, 00:37:33.240 "base_bdevs_list": [ 00:37:33.240 { 00:37:33.240 "name": "spare", 00:37:33.240 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:33.240 "is_configured": true, 00:37:33.240 "data_offset": 2048, 00:37:33.240 "data_size": 63488 00:37:33.240 }, 00:37:33.240 { 00:37:33.240 "name": "BaseBdev2", 00:37:33.240 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:33.240 "is_configured": true, 00:37:33.240 "data_offset": 2048, 00:37:33.240 "data_size": 63488 00:37:33.240 }, 00:37:33.240 { 00:37:33.240 "name": "BaseBdev3", 00:37:33.240 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:33.240 "is_configured": true, 00:37:33.240 
"data_offset": 2048, 00:37:33.240 "data_size": 63488 00:37:33.240 }, 00:37:33.240 { 00:37:33.240 "name": "BaseBdev4", 00:37:33.240 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:33.240 "is_configured": true, 00:37:33.240 "data_offset": 2048, 00:37:33.240 "data_size": 63488 00:37:33.240 } 00:37:33.240 ] 00:37:33.240 }' 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@660 -- # break 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.240 02:07:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.499 02:07:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:33.499 "name": "raid_bdev1", 00:37:33.499 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:33.499 "strip_size_kb": 64, 00:37:33.499 "state": "online", 00:37:33.499 "raid_level": "raid5f", 00:37:33.499 "superblock": true, 00:37:33.499 "num_base_bdevs": 4, 00:37:33.499 "num_base_bdevs_discovered": 4, 00:37:33.499 "num_base_bdevs_operational": 4, 00:37:33.499 "base_bdevs_list": [ 00:37:33.499 { 00:37:33.499 "name": "spare", 00:37:33.499 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:33.499 "is_configured": true, 00:37:33.499 "data_offset": 2048, 00:37:33.499 "data_size": 63488 00:37:33.499 }, 00:37:33.499 { 00:37:33.499 "name": "BaseBdev2", 00:37:33.499 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:33.499 "is_configured": true, 00:37:33.499 "data_offset": 2048, 00:37:33.499 "data_size": 63488 00:37:33.499 }, 00:37:33.499 { 00:37:33.499 "name": "BaseBdev3", 00:37:33.499 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:33.499 "is_configured": true, 00:37:33.499 "data_offset": 2048, 00:37:33.499 "data_size": 63488 00:37:33.499 }, 00:37:33.499 { 00:37:33.499 "name": "BaseBdev4", 00:37:33.499 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:33.499 "is_configured": true, 00:37:33.499 "data_offset": 2048, 00:37:33.499 "data_size": 63488 00:37:33.499 } 00:37:33.499 ] 00:37:33.499 }' 00:37:33.499 02:07:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:33.499 02:07:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:33.499 02:07:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:33.756 02:07:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:33.756 02:07:33 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.757 02:07:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.015 02:07:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:34.015 "name": "raid_bdev1", 00:37:34.015 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:34.015 "strip_size_kb": 64, 00:37:34.015 "state": "online", 00:37:34.015 "raid_level": "raid5f", 00:37:34.015 "superblock": true, 00:37:34.015 "num_base_bdevs": 4, 00:37:34.015 "num_base_bdevs_discovered": 4, 00:37:34.015 "num_base_bdevs_operational": 4, 00:37:34.015 "base_bdevs_list": [ 00:37:34.015 { 00:37:34.015 "name": "spare", 00:37:34.015 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:34.015 "is_configured": true, 00:37:34.015 "data_offset": 2048, 00:37:34.015 "data_size": 63488 00:37:34.015 }, 00:37:34.015 { 00:37:34.015 "name": "BaseBdev2", 00:37:34.015 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:34.015 "is_configured": true, 00:37:34.015 "data_offset": 2048, 00:37:34.015 "data_size": 63488 00:37:34.015 }, 00:37:34.015 { 00:37:34.015 "name": "BaseBdev3", 00:37:34.015 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:34.015 "is_configured": true, 00:37:34.015 "data_offset": 2048, 00:37:34.015 "data_size": 63488 00:37:34.015 }, 00:37:34.015 { 00:37:34.015 "name": "BaseBdev4", 00:37:34.015 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:34.015 "is_configured": true, 00:37:34.015 "data_offset": 2048, 00:37:34.015 "data_size": 63488 00:37:34.015 } 00:37:34.015 ] 00:37:34.015 }' 00:37:34.015 02:07:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:34.015 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:37:34.581 02:07:34 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:34.839 [2024-04-24 02:07:34.810111] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:34.839 [2024-04-24 02:07:34.810155] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:34.839 [2024-04-24 02:07:34.810231] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:34.839 [2024-04-24 02:07:34.810334] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:34.840 [2024-04-24 02:07:34.810346] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:37:34.840 02:07:34 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.840 02:07:34 -- bdev/bdev_raid.sh@671 -- # jq length 00:37:35.098 02:07:35 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:37:35.098 02:07:35 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:37:35.098 02:07:35 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@12 -- # local i 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:35.098 02:07:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:35.664 /dev/nbd0 00:37:35.664 02:07:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:35.664 02:07:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:35.664 02:07:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:35.664 02:07:35 -- common/autotest_common.sh@855 -- # local i 00:37:35.664 02:07:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:35.664 02:07:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:35.664 02:07:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:35.664 02:07:35 -- common/autotest_common.sh@859 -- # break 00:37:35.664 02:07:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:35.664 02:07:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:35.664 02:07:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:35.664 1+0 records in 00:37:35.664 1+0 records out 00:37:35.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653956 s, 6.3 MB/s 00:37:35.664 02:07:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:35.664 02:07:35 -- common/autotest_common.sh@872 -- # size=4096 00:37:35.664 02:07:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:35.664 02:07:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:35.664 02:07:35 -- common/autotest_common.sh@875 -- # return 0 00:37:35.664 02:07:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:35.664 02:07:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:35.664 02:07:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:35.922 /dev/nbd1 00:37:35.922 02:07:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:35.922 02:07:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:35.922 02:07:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:37:35.922 02:07:35 -- common/autotest_common.sh@855 -- # local i 00:37:35.922 02:07:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:35.922 02:07:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:35.922 02:07:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:37:35.922 02:07:35 -- common/autotest_common.sh@859 -- # break 00:37:35.922 02:07:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:35.922 02:07:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:35.922 02:07:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:35.922 1+0 records in 00:37:35.922 1+0 records out 00:37:35.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709352 s, 5.8 MB/s 00:37:35.922 02:07:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:35.922 
02:07:35 -- common/autotest_common.sh@872 -- # size=4096 00:37:35.922 02:07:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:35.922 02:07:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:35.922 02:07:35 -- common/autotest_common.sh@875 -- # return 0 00:37:35.922 02:07:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:35.922 02:07:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:35.922 02:07:35 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:36.179 02:07:36 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:36.179 02:07:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:36.179 02:07:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:36.179 02:07:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@51 -- # local i 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@41 -- # break 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@45 -- # return 0 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:36.180 02:07:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:36.438 02:07:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:36.438 02:07:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:36.438 02:07:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:36.438 02:07:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:36.438 02:07:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:36.438 02:07:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:36.696 02:07:36 -- bdev/nbd_common.sh@41 -- # break 00:37:36.696 02:07:36 -- bdev/nbd_common.sh@45 -- # return 0 00:37:36.696 02:07:36 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:37:36.696 02:07:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:37:36.696 02:07:36 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:37:36.696 02:07:36 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:36.696 02:07:36 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:36.954 [2024-04-24 02:07:36.939267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:36.954 [2024-04-24 02:07:36.939392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:36.954 [2024-04-24 02:07:36.939437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:36.954 [2024-04-24 02:07:36.939461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
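Note: the cmp -i 1048576 /dev/nbd0 /dev/nbd1 step above compares BaseBdev1 against the rebuilt spare after both are exported over NBD, skipping the first 1 MiB of each device. That offset appears to correspond to the data_offset of 2048 blocks at 512 B reported in the raid JSON, so only the data region is compared rather than the superblock area; this reading of the offset is an inference from the trace, not something the script states. A minimal sketch of the same check, using only RPC calls that appear in the trace:

    # Export two base bdevs over NBD and byte-compare their data regions,
    # skipping the first 1 MiB (assumed superblock/metadata area) on both devices.
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    if cmp -i 1048576 /dev/nbd0 /dev/nbd1; then
        echo 'rebuilt data matches the original base bdev'
    fi
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1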
00:37:36.954 [2024-04-24 02:07:36.942238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:36.954 [2024-04-24 02:07:36.942326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:36.954 [2024-04-24 02:07:36.942478] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:36.954 [2024-04-24 02:07:36.942554] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:36.954 BaseBdev1 00:37:36.954 02:07:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:37:36.954 02:07:36 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:37:36.954 02:07:36 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:37:37.210 02:07:37 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:37.468 [2024-04-24 02:07:37.411380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:37.468 [2024-04-24 02:07:37.411465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:37.468 [2024-04-24 02:07:37.411510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:37:37.468 [2024-04-24 02:07:37.411532] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:37.468 [2024-04-24 02:07:37.412033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:37.468 [2024-04-24 02:07:37.412093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:37.468 [2024-04-24 02:07:37.412237] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:37:37.468 [2024-04-24 02:07:37.412261] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:37:37.468 [2024-04-24 02:07:37.412269] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:37.468 [2024-04-24 02:07:37.412290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:37:37.468 [2024-04-24 02:07:37.412384] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:37.468 BaseBdev2 00:37:37.468 02:07:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:37:37.468 02:07:37 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:37:37.468 02:07:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:37:37.730 02:07:37 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:37:38.021 [2024-04-24 02:07:37.927484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:37:38.021 [2024-04-24 02:07:37.927607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:38.021 [2024-04-24 02:07:37.927651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:37:38.021 [2024-04-24 02:07:37.927685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:38.021 [2024-04-24 02:07:37.928266] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:38.021 [2024-04-24 02:07:37.928337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:38.021 [2024-04-24 02:07:37.928458] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:37:38.021 [2024-04-24 02:07:37.928483] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:38.021 BaseBdev3 00:37:38.021 02:07:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:37:38.021 02:07:37 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:37:38.021 02:07:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:37:38.280 02:07:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:37:38.280 [2024-04-24 02:07:38.359579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:37:38.280 [2024-04-24 02:07:38.359673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:38.280 [2024-04-24 02:07:38.359729] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:37:38.280 [2024-04-24 02:07:38.359759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:38.280 [2024-04-24 02:07:38.360311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:38.280 [2024-04-24 02:07:38.360378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:38.280 [2024-04-24 02:07:38.360517] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:37:38.280 [2024-04-24 02:07:38.360545] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:38.538 BaseBdev4 00:37:38.538 02:07:38 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:38.538 02:07:38 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:38.797 [2024-04-24 02:07:38.771703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:38.797 [2024-04-24 02:07:38.771805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:38.797 [2024-04-24 02:07:38.771839] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:37:38.797 [2024-04-24 02:07:38.771868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:38.797 [2024-04-24 02:07:38.772437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:38.797 [2024-04-24 02:07:38.772507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:38.797 [2024-04-24 02:07:38.772643] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:37:38.797 [2024-04-24 02:07:38.772668] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:38.797 spare 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.797 02:07:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.797 [2024-04-24 02:07:38.872793] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:37:38.797 [2024-04-24 02:07:38.872830] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:38.797 [2024-04-24 02:07:38.872988] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:37:39.056 [2024-04-24 02:07:38.881995] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:37:39.056 [2024-04-24 02:07:38.882025] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:37:39.056 [2024-04-24 02:07:38.882207] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:39.056 02:07:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:39.056 "name": "raid_bdev1", 00:37:39.056 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:39.056 "strip_size_kb": 64, 00:37:39.056 "state": "online", 00:37:39.056 "raid_level": "raid5f", 00:37:39.056 "superblock": true, 00:37:39.056 "num_base_bdevs": 4, 00:37:39.056 "num_base_bdevs_discovered": 4, 00:37:39.056 "num_base_bdevs_operational": 4, 00:37:39.056 "base_bdevs_list": [ 00:37:39.056 { 00:37:39.056 "name": "spare", 00:37:39.056 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:39.056 "is_configured": true, 00:37:39.056 "data_offset": 2048, 00:37:39.056 "data_size": 63488 00:37:39.056 }, 00:37:39.056 { 00:37:39.056 "name": "BaseBdev2", 00:37:39.056 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:39.056 "is_configured": true, 00:37:39.056 "data_offset": 2048, 00:37:39.056 "data_size": 63488 00:37:39.056 }, 00:37:39.056 { 00:37:39.056 "name": "BaseBdev3", 00:37:39.056 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:39.056 "is_configured": true, 00:37:39.056 "data_offset": 2048, 00:37:39.056 "data_size": 63488 00:37:39.056 }, 00:37:39.056 { 00:37:39.056 "name": "BaseBdev4", 00:37:39.056 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:39.056 "is_configured": true, 00:37:39.056 "data_offset": 2048, 00:37:39.056 "data_size": 63488 00:37:39.056 } 00:37:39.056 ] 00:37:39.056 }' 00:37:39.056 02:07:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:39.056 02:07:39 -- common/autotest_common.sh@10 -- # set +x 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.623 02:07:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.190 02:07:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:40.190 "name": "raid_bdev1", 00:37:40.190 "uuid": "f2b912ae-0a2f-4c3b-9c75-d093cf2de1b7", 00:37:40.190 "strip_size_kb": 64, 00:37:40.190 "state": "online", 00:37:40.190 "raid_level": "raid5f", 00:37:40.190 "superblock": true, 00:37:40.190 "num_base_bdevs": 4, 00:37:40.191 "num_base_bdevs_discovered": 4, 00:37:40.191 "num_base_bdevs_operational": 4, 00:37:40.191 "base_bdevs_list": [ 00:37:40.191 { 00:37:40.191 "name": "spare", 00:37:40.191 "uuid": "1adb6aab-b472-529a-a25c-3deaea505a37", 00:37:40.191 "is_configured": true, 00:37:40.191 "data_offset": 2048, 00:37:40.191 "data_size": 63488 00:37:40.191 }, 00:37:40.191 { 00:37:40.191 "name": "BaseBdev2", 00:37:40.191 "uuid": "e37221d4-3622-5a92-a5b7-7ae8cee75f9c", 00:37:40.191 "is_configured": true, 00:37:40.191 "data_offset": 2048, 00:37:40.191 "data_size": 63488 00:37:40.191 }, 00:37:40.191 { 00:37:40.191 "name": "BaseBdev3", 00:37:40.191 "uuid": "39339c62-ac1c-5937-b90d-d5b9d8b733ea", 00:37:40.191 "is_configured": true, 00:37:40.191 "data_offset": 2048, 00:37:40.191 "data_size": 63488 00:37:40.191 }, 00:37:40.191 { 00:37:40.191 "name": "BaseBdev4", 00:37:40.191 "uuid": "91450e2d-4a2a-505a-9c1b-47fdda92ca45", 00:37:40.191 "is_configured": true, 00:37:40.191 "data_offset": 2048, 00:37:40.191 "data_size": 63488 00:37:40.191 } 00:37:40.191 ] 00:37:40.191 }' 00:37:40.191 02:07:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:40.191 02:07:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:40.191 02:07:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:40.191 02:07:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:40.191 02:07:40 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:40.191 02:07:40 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:40.449 02:07:40 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:37:40.449 02:07:40 -- bdev/bdev_raid.sh@709 -- # killprocess 140893 00:37:40.449 02:07:40 -- common/autotest_common.sh@936 -- # '[' -z 140893 ']' 00:37:40.449 02:07:40 -- common/autotest_common.sh@940 -- # kill -0 140893 00:37:40.449 02:07:40 -- common/autotest_common.sh@941 -- # uname 00:37:40.449 02:07:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:40.449 02:07:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140893 00:37:40.449 02:07:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:40.449 02:07:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:40.449 02:07:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140893' 00:37:40.449 killing process with pid 140893 00:37:40.449 02:07:40 -- common/autotest_common.sh@955 -- # kill 140893 00:37:40.449 Received shutdown signal, test time was about 60.000000 seconds 00:37:40.449 00:37:40.449 Latency(us) 00:37:40.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.449 =================================================================================================================== 00:37:40.449 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:40.449 [2024-04-24 02:07:40.337764] bdev_raid.c:1364:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:37:40.449 [2024-04-24 02:07:40.337860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:40.449 [2024-04-24 02:07:40.337960] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:40.449 [2024-04-24 02:07:40.337972] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:37:40.449 02:07:40 -- common/autotest_common.sh@960 -- # wait 140893 00:37:41.016 [2024-04-24 02:07:40.914190] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:42.390 ************************************ 00:37:42.390 END TEST raid5f_rebuild_test_sb 00:37:42.390 ************************************ 00:37:42.390 02:07:42 -- bdev/bdev_raid.sh@711 -- # return 0 00:37:42.390 00:37:42.390 real 0m32.133s 00:37:42.390 user 0m48.219s 00:37:42.390 sys 0m4.312s 00:37:42.390 02:07:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:42.390 02:07:42 -- common/autotest_common.sh@10 -- # set +x 00:37:42.390 02:07:42 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:37:42.390 ************************************ 00:37:42.390 END TEST bdev_raid 00:37:42.390 ************************************ 00:37:42.390 00:37:42.390 real 13m27.149s 00:37:42.390 user 21m35.801s 00:37:42.390 sys 2m0.864s 00:37:42.390 02:07:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:42.390 02:07:42 -- common/autotest_common.sh@10 -- # set +x 00:37:42.390 02:07:42 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:42.390 02:07:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:37:42.390 02:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:42.390 02:07:42 -- common/autotest_common.sh@10 -- # set +x 00:37:42.650 ************************************ 00:37:42.650 START TEST bdevperf_config 00:37:42.650 ************************************ 00:37:42.650 02:07:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:42.650 * Looking for test storage... 
00:37:42.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:37:42.650 02:07:42 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:37:42.650 02:07:42 -- bdevperf/common.sh@8 -- # local job_section=global 00:37:42.650 02:07:42 -- bdevperf/common.sh@9 -- # local rw=read 00:37:42.650 02:07:42 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:42.650 02:07:42 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:42.650 02:07:42 -- bdevperf/common.sh@13 -- # cat 00:37:42.650 02:07:42 -- bdevperf/common.sh@18 -- # job='[global]' 00:37:42.650 00:37:42.650 02:07:42 -- bdevperf/common.sh@19 -- # echo 00:37:42.650 02:07:42 -- bdevperf/common.sh@20 -- # cat 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@18 -- # create_job job0 00:37:42.650 02:07:42 -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:42.650 02:07:42 -- bdevperf/common.sh@9 -- # local rw= 00:37:42.650 02:07:42 -- bdevperf/common.sh@10 -- # local filename= 00:37:42.650 02:07:42 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:42.650 02:07:42 -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:42.650 02:07:42 -- bdevperf/common.sh@19 -- # echo 00:37:42.650 00:37:42.650 02:07:42 -- bdevperf/common.sh@20 -- # cat 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@19 -- # create_job job1 00:37:42.650 02:07:42 -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:42.650 02:07:42 -- bdevperf/common.sh@9 -- # local rw= 00:37:42.650 02:07:42 -- bdevperf/common.sh@10 -- # local filename= 00:37:42.650 02:07:42 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:42.650 02:07:42 -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:42.650 02:07:42 -- bdevperf/common.sh@19 -- # echo 00:37:42.650 00:37:42.650 02:07:42 -- bdevperf/common.sh@20 -- # cat 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@20 -- # create_job job2 00:37:42.650 02:07:42 -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:42.650 02:07:42 -- bdevperf/common.sh@9 -- # local rw= 00:37:42.650 02:07:42 -- bdevperf/common.sh@10 -- # local filename= 00:37:42.650 02:07:42 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:42.650 02:07:42 -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:42.650 02:07:42 -- bdevperf/common.sh@19 -- # echo 00:37:42.650 00:37:42.650 02:07:42 -- bdevperf/common.sh@20 -- # cat 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@21 -- # create_job job3 00:37:42.650 02:07:42 -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:42.650 02:07:42 -- bdevperf/common.sh@9 -- # local rw= 00:37:42.650 02:07:42 -- bdevperf/common.sh@10 -- # local filename= 00:37:42.650 02:07:42 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:42.650 02:07:42 -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:42.650 02:07:42 -- bdevperf/common.sh@19 -- # echo 00:37:42.650 00:37:42.650 02:07:42 -- bdevperf/common.sh@20 -- # cat 00:37:42.650 02:07:42 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:48.010 02:07:47 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-24 02:07:42.737184] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:48.010 [2024-04-24 02:07:42.737470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141691 ] 00:37:48.010 Using job config with 4 jobs 00:37:48.010 [2024-04-24 02:07:42.922077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.010 [2024-04-24 02:07:43.238919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.010 cpumask for '\''job0'\'' is too big 00:37:48.010 cpumask for '\''job1'\'' is too big 00:37:48.010 cpumask for '\''job2'\'' is too big 00:37:48.010 cpumask for '\''job3'\'' is too big 00:37:48.010 Running I/O for 2 seconds... 00:37:48.010 00:37:48.010 Latency(us) 00:37:48.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30467.53 29.75 0.00 0.00 8394.96 1497.97 12295.80 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30445.17 29.73 0.00 0.00 8387.18 1474.56 11983.73 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30423.25 29.71 0.00 0.00 8378.95 1443.35 12233.39 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30400.63 29.69 0.00 0.00 8370.68 1357.53 12170.97 00:37:48.010 =================================================================================================================== 00:37:48.010 Total : 121736.58 118.88 0.00 0.00 8382.94 1357.53 12295.80' 00:37:48.010 02:07:47 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-24 02:07:42.737184] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:48.010 [2024-04-24 02:07:42.737470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141691 ] 00:37:48.010 Using job config with 4 jobs 00:37:48.010 [2024-04-24 02:07:42.922077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.010 [2024-04-24 02:07:43.238919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.010 cpumask for '\''job0'\'' is too big 00:37:48.010 cpumask for '\''job1'\'' is too big 00:37:48.010 cpumask for '\''job2'\'' is too big 00:37:48.010 cpumask for '\''job3'\'' is too big 00:37:48.010 Running I/O for 2 seconds... 
00:37:48.010 00:37:48.010 Latency(us) 00:37:48.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30467.53 29.75 0.00 0.00 8394.96 1497.97 12295.80 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30445.17 29.73 0.00 0.00 8387.18 1474.56 11983.73 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30423.25 29.71 0.00 0.00 8378.95 1443.35 12233.39 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30400.63 29.69 0.00 0.00 8370.68 1357.53 12170.97 00:37:48.010 =================================================================================================================== 00:37:48.010 Total : 121736.58 118.88 0.00 0.00 8382.94 1357.53 12295.80' 00:37:48.010 02:07:47 -- bdevperf/common.sh@32 -- # echo '[2024-04-24 02:07:42.737184] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:48.010 [2024-04-24 02:07:42.737470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141691 ] 00:37:48.010 Using job config with 4 jobs 00:37:48.010 [2024-04-24 02:07:42.922077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.010 [2024-04-24 02:07:43.238919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.010 cpumask for '\''job0'\'' is too big 00:37:48.010 cpumask for '\''job1'\'' is too big 00:37:48.010 cpumask for '\''job2'\'' is too big 00:37:48.010 cpumask for '\''job3'\'' is too big 00:37:48.010 Running I/O for 2 seconds... 00:37:48.010 00:37:48.010 Latency(us) 00:37:48.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30467.53 29.75 0.00 0.00 8394.96 1497.97 12295.80 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30445.17 29.73 0.00 0.00 8387.18 1474.56 11983.73 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30423.25 29.71 0.00 0.00 8378.95 1443.35 12233.39 00:37:48.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:48.010 Malloc0 : 2.02 30400.63 29.69 0.00 0.00 8370.68 1357.53 12170.97 00:37:48.010 =================================================================================================================== 00:37:48.010 Total : 121736.58 118.88 0.00 0.00 8382.94 1357.53 12295.80' 00:37:48.010 02:07:47 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:48.010 02:07:47 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:48.010 02:07:47 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:37:48.010 02:07:47 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:48.010 [2024-04-24 02:07:47.657578] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:37:48.010 [2024-04-24 02:07:47.657902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141760 ] 00:37:48.010 [2024-04-24 02:07:47.839912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.268 [2024-04-24 02:07:48.164276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.835 cpumask for 'job0' is too big 00:37:48.835 cpumask for 'job1' is too big 00:37:48.835 cpumask for 'job2' is too big 00:37:48.835 cpumask for 'job3' is too big 00:37:53.018 02:07:52 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:37:53.018 Running I/O for 2 seconds... 00:37:53.018 00:37:53.018 Latency(us) 00:37:53.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.018 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:53.018 Malloc0 : 2.01 29773.74 29.08 0.00 0.00 8590.76 1404.34 12046.14 00:37:53.019 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:53.019 Malloc0 : 2.01 29752.05 29.05 0.00 0.00 8583.37 1365.33 10673.01 00:37:53.019 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:53.019 Malloc0 : 2.02 29795.41 29.10 0.00 0.00 8556.48 1341.93 9799.19 00:37:53.019 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:53.019 Malloc0 : 2.02 29773.42 29.08 0.00 0.00 8547.94 1396.54 9799.19 00:37:53.019 =================================================================================================================== 00:37:53.019 Total : 119094.63 116.30 0.00 0.00 8569.60 1341.93 12046.14' 00:37:53.019 02:07:52 -- bdevperf/test_config.sh@27 -- # cleanup 00:37:53.019 02:07:52 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:53.019 02:07:52 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:37:53.019 02:07:52 -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:53.019 02:07:52 -- bdevperf/common.sh@9 -- # local rw=write 00:37:53.019 02:07:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:53.019 02:07:52 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:53.019 02:07:52 -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:53.019 00:37:53.019 02:07:52 -- bdevperf/common.sh@19 -- # echo 00:37:53.019 02:07:52 -- bdevperf/common.sh@20 -- # cat 00:37:53.019 02:07:52 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:37:53.019 02:07:52 -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:53.019 02:07:52 -- bdevperf/common.sh@9 -- # local rw=write 00:37:53.019 02:07:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:53.019 02:07:52 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:53.019 02:07:52 -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:53.019 00:37:53.019 02:07:52 -- bdevperf/common.sh@19 -- # echo 00:37:53.019 02:07:52 -- bdevperf/common.sh@20 -- # cat 00:37:53.019 02:07:52 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:37:53.019 02:07:52 -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:53.019 02:07:52 -- bdevperf/common.sh@9 -- # local rw=write 00:37:53.019 02:07:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:53.019 02:07:52 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:53.019 02:07:52 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:37:53.019 00:37:53.019 02:07:52 -- bdevperf/common.sh@19 -- # echo 00:37:53.019 02:07:52 -- bdevperf/common.sh@20 -- # cat 00:37:53.019 02:07:52 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:57.216 02:07:57 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-24 02:07:52.500476] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:57.216 [2024-04-24 02:07:52.500608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141816 ] 00:37:57.216 Using job config with 3 jobs 00:37:57.216 [2024-04-24 02:07:52.657621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.216 [2024-04-24 02:07:52.900792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.216 cpumask for '\''job0'\'' is too big 00:37:57.216 cpumask for '\''job1'\'' is too big 00:37:57.216 cpumask for '\''job2'\'' is too big 00:37:57.216 Running I/O for 2 seconds... 00:37:57.216 00:37:57.216 Latency(us) 00:37:57.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39876.98 38.94 0.00 0.00 6413.42 1451.15 9549.53 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39848.68 38.91 0.00 0.00 6406.31 1529.17 7926.74 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39909.26 38.97 0.00 0.00 6384.64 670.96 7489.83 00:37:57.216 =================================================================================================================== 00:37:57.216 Total : 119634.92 116.83 0.00 0.00 6401.44 670.96 9549.53' 00:37:57.216 02:07:57 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-24 02:07:52.500476] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:57.216 [2024-04-24 02:07:52.500608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141816 ] 00:37:57.216 Using job config with 3 jobs 00:37:57.216 [2024-04-24 02:07:52.657621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.216 [2024-04-24 02:07:52.900792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.216 cpumask for '\''job0'\'' is too big 00:37:57.216 cpumask for '\''job1'\'' is too big 00:37:57.216 cpumask for '\''job2'\'' is too big 00:37:57.216 Running I/O for 2 seconds... 
00:37:57.216 00:37:57.216 Latency(us) 00:37:57.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39876.98 38.94 0.00 0.00 6413.42 1451.15 9549.53 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39848.68 38.91 0.00 0.00 6406.31 1529.17 7926.74 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39909.26 38.97 0.00 0.00 6384.64 670.96 7489.83 00:37:57.216 =================================================================================================================== 00:37:57.216 Total : 119634.92 116.83 0.00 0.00 6401.44 670.96 9549.53' 00:37:57.216 02:07:57 -- bdevperf/common.sh@32 -- # echo '[2024-04-24 02:07:52.500476] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:37:57.216 [2024-04-24 02:07:52.500608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141816 ] 00:37:57.216 Using job config with 3 jobs 00:37:57.216 [2024-04-24 02:07:52.657621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.216 [2024-04-24 02:07:52.900792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.216 cpumask for '\''job0'\'' is too big 00:37:57.216 cpumask for '\''job1'\'' is too big 00:37:57.216 cpumask for '\''job2'\'' is too big 00:37:57.216 Running I/O for 2 seconds... 00:37:57.216 00:37:57.216 Latency(us) 00:37:57.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39876.98 38.94 0.00 0.00 6413.42 1451.15 9549.53 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39848.68 38.91 0.00 0.00 6406.31 1529.17 7926.74 00:37:57.216 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:57.216 Malloc0 : 2.01 39909.26 38.97 0.00 0.00 6384.64 670.96 7489.83 00:37:57.216 =================================================================================================================== 00:37:57.217 Total : 119634.92 116.83 0.00 0.00 6401.44 670.96 9549.53' 00:37:57.217 02:07:57 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:57.217 02:07:57 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@35 -- # cleanup 00:37:57.217 02:07:57 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:37:57.217 02:07:57 -- bdevperf/common.sh@8 -- # local job_section=global 00:37:57.217 02:07:57 -- bdevperf/common.sh@9 -- # local rw=rw 00:37:57.217 02:07:57 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:37:57.217 02:07:57 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:57.217 02:07:57 -- bdevperf/common.sh@13 -- # cat 00:37:57.217 02:07:57 -- bdevperf/common.sh@18 -- # job='[global]' 00:37:57.217 00:37:57.217 02:07:57 -- bdevperf/common.sh@19 -- # echo 00:37:57.217 02:07:57 
-- bdevperf/common.sh@20 -- # cat 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@38 -- # create_job job0 00:37:57.217 02:07:57 -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:57.217 02:07:57 -- bdevperf/common.sh@9 -- # local rw= 00:37:57.217 02:07:57 -- bdevperf/common.sh@10 -- # local filename= 00:37:57.217 02:07:57 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:57.217 02:07:57 -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:57.217 00:37:57.217 02:07:57 -- bdevperf/common.sh@19 -- # echo 00:37:57.217 02:07:57 -- bdevperf/common.sh@20 -- # cat 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@39 -- # create_job job1 00:37:57.217 02:07:57 -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:57.217 02:07:57 -- bdevperf/common.sh@9 -- # local rw= 00:37:57.217 02:07:57 -- bdevperf/common.sh@10 -- # local filename= 00:37:57.217 02:07:57 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:57.217 02:07:57 -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:57.217 00:37:57.217 02:07:57 -- bdevperf/common.sh@19 -- # echo 00:37:57.217 02:07:57 -- bdevperf/common.sh@20 -- # cat 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@40 -- # create_job job2 00:37:57.217 02:07:57 -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:57.217 02:07:57 -- bdevperf/common.sh@9 -- # local rw= 00:37:57.217 02:07:57 -- bdevperf/common.sh@10 -- # local filename= 00:37:57.217 02:07:57 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:57.217 02:07:57 -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:57.217 02:07:57 -- bdevperf/common.sh@19 -- # echo 00:37:57.217 00:37:57.217 02:07:57 -- bdevperf/common.sh@20 -- # cat 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@41 -- # create_job job3 00:37:57.217 02:07:57 -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:57.217 02:07:57 -- bdevperf/common.sh@9 -- # local rw= 00:37:57.217 02:07:57 -- bdevperf/common.sh@10 -- # local filename= 00:37:57.217 02:07:57 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:57.217 02:07:57 -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:57.217 00:37:57.217 02:07:57 -- bdevperf/common.sh@19 -- # echo 00:37:57.217 02:07:57 -- bdevperf/common.sh@20 -- # cat 00:37:57.217 02:07:57 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:02.490 02:08:01 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-24 02:07:57.243545] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:02.490 [2024-04-24 02:07:57.243732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141887 ] 00:38:02.490 Using job config with 4 jobs 00:38:02.490 [2024-04-24 02:07:57.422236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.490 [2024-04-24 02:07:57.655147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.490 cpumask for '\''job0'\'' is too big 00:38:02.490 cpumask for '\''job1'\'' is too big 00:38:02.490 cpumask for '\''job2'\'' is too big 00:38:02.490 cpumask for '\''job3'\'' is too big 00:38:02.490 Running I/O for 2 seconds... 
00:38:02.490 00:38:02.490 Latency(us) 00:38:02.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.490 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.490 Malloc0 : 2.03 15262.68 14.90 0.00 0.00 16758.40 4244.24 27712.37 00:38:02.490 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.490 Malloc1 : 2.03 15251.21 14.89 0.00 0.00 16753.14 4993.22 26838.55 00:38:02.490 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.490 Malloc0 : 2.03 15241.36 14.88 0.00 0.00 16699.27 3573.27 22344.66 00:38:02.490 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.490 Malloc1 : 2.03 15231.11 14.87 0.00 0.00 16696.67 3807.33 22219.82 00:38:02.490 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.490 Malloc0 : 2.04 15220.31 14.86 0.00 0.00 16658.81 3011.54 19099.06 00:38:02.490 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.490 Malloc1 : 2.04 15210.03 14.85 0.00 0.00 16661.11 3573.27 18974.23 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.04 15294.64 14.94 0.00 0.00 16522.53 2683.86 18599.74 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.04 15283.81 14.93 0.00 0.00 16522.71 2168.93 18724.57 00:38:02.491 =================================================================================================================== 00:38:02.491 Total : 121995.16 119.14 0.00 0.00 16658.80 2168.93 27712.37' 00:38:02.491 02:08:01 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-24 02:07:57.243545] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:02.491 [2024-04-24 02:07:57.243732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141887 ] 00:38:02.491 Using job config with 4 jobs 00:38:02.491 [2024-04-24 02:07:57.422236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.491 [2024-04-24 02:07:57.655147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.491 cpumask for '\''job0'\'' is too big 00:38:02.491 cpumask for '\''job1'\'' is too big 00:38:02.491 cpumask for '\''job2'\'' is too big 00:38:02.491 cpumask for '\''job3'\'' is too big 00:38:02.491 Running I/O for 2 seconds... 
00:38:02.491 00:38:02.491 Latency(us) 00:38:02.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.03 15262.68 14.90 0.00 0.00 16758.40 4244.24 27712.37 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.03 15251.21 14.89 0.00 0.00 16753.14 4993.22 26838.55 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.03 15241.36 14.88 0.00 0.00 16699.27 3573.27 22344.66 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.03 15231.11 14.87 0.00 0.00 16696.67 3807.33 22219.82 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.04 15220.31 14.86 0.00 0.00 16658.81 3011.54 19099.06 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.04 15210.03 14.85 0.00 0.00 16661.11 3573.27 18974.23 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.04 15294.64 14.94 0.00 0.00 16522.53 2683.86 18599.74 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.04 15283.81 14.93 0.00 0.00 16522.71 2168.93 18724.57 00:38:02.491 =================================================================================================================== 00:38:02.491 Total : 121995.16 119.14 0.00 0.00 16658.80 2168.93 27712.37' 00:38:02.491 02:08:01 -- bdevperf/common.sh@32 -- # echo '[2024-04-24 02:07:57.243545] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:02.491 [2024-04-24 02:07:57.243732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141887 ] 00:38:02.491 Using job config with 4 jobs 00:38:02.491 [2024-04-24 02:07:57.422236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.491 [2024-04-24 02:07:57.655147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.491 cpumask for '\''job0'\'' is too big 00:38:02.491 cpumask for '\''job1'\'' is too big 00:38:02.491 cpumask for '\''job2'\'' is too big 00:38:02.491 cpumask for '\''job3'\'' is too big 00:38:02.491 Running I/O for 2 seconds... 
00:38:02.491 00:38:02.491 Latency(us) 00:38:02.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.03 15262.68 14.90 0.00 0.00 16758.40 4244.24 27712.37 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.03 15251.21 14.89 0.00 0.00 16753.14 4993.22 26838.55 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.03 15241.36 14.88 0.00 0.00 16699.27 3573.27 22344.66 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.03 15231.11 14.87 0.00 0.00 16696.67 3807.33 22219.82 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.04 15220.31 14.86 0.00 0.00 16658.81 3011.54 19099.06 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.04 15210.03 14.85 0.00 0.00 16661.11 3573.27 18974.23 00:38:02.491 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc0 : 2.04 15294.64 14.94 0.00 0.00 16522.53 2683.86 18599.74 00:38:02.491 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:02.491 Malloc1 : 2.04 15283.81 14.93 0.00 0.00 16522.71 2168.93 18724.57 00:38:02.491 =================================================================================================================== 00:38:02.491 Total : 121995.16 119.14 0.00 0.00 16658.80 2168.93 27712.37' 00:38:02.491 02:08:01 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:02.491 02:08:01 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:02.491 02:08:01 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:38:02.491 02:08:01 -- bdevperf/test_config.sh@44 -- # cleanup 00:38:02.491 02:08:01 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:02.491 02:08:01 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:38:02.491 ************************************ 00:38:02.491 END TEST bdevperf_config 00:38:02.491 ************************************ 00:38:02.491 00:38:02.491 real 0m19.464s 00:38:02.491 user 0m17.664s 00:38:02.491 sys 0m1.257s 00:38:02.491 02:08:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:02.491 02:08:01 -- common/autotest_common.sh@10 -- # set +x 00:38:02.491 02:08:02 -- spdk/autotest.sh@188 -- # uname -s 00:38:02.491 02:08:02 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]] 00:38:02.491 02:08:02 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:02.491 02:08:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:38:02.491 02:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:02.491 02:08:02 -- common/autotest_common.sh@10 -- # set +x 00:38:02.491 ************************************ 00:38:02.491 START TEST reactor_set_interrupt 00:38:02.491 ************************************ 00:38:02.491 02:08:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:02.491 * Looking for test storage... 
00:38:02.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.491 02:08:02 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:38:02.491 02:08:02 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:02.491 02:08:02 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.491 02:08:02 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.491 02:08:02 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:38:02.491 02:08:02 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:02.491 02:08:02 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:38:02.491 02:08:02 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:38:02.491 02:08:02 -- common/autotest_common.sh@34 -- # set -e 00:38:02.491 02:08:02 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:38:02.491 02:08:02 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:38:02.491 02:08:02 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:38:02.491 02:08:02 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:02.491 02:08:02 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:02.491 02:08:02 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:02.491 02:08:02 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:38:02.491 02:08:02 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:02.491 02:08:02 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:02.491 02:08:02 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:38:02.491 02:08:02 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:02.491 02:08:02 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:02.491 02:08:02 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:02.491 02:08:02 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:02.491 02:08:02 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:02.491 02:08:02 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:02.491 02:08:02 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:02.492 02:08:02 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:02.492 02:08:02 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:02.492 02:08:02 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:38:02.492 02:08:02 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:38:02.492 02:08:02 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:02.492 02:08:02 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:02.492 02:08:02 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:38:02.492 02:08:02 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:38:02.492 02:08:02 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:38:02.492 02:08:02 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:38:02.492 02:08:02 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:38:02.492 02:08:02 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:38:02.492 02:08:02 -- common/build_config.sh@26 -- 
# CONFIG_HAVE_ARC4RANDOM=n 00:38:02.492 02:08:02 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:02.492 02:08:02 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:38:02.492 02:08:02 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:38:02.492 02:08:02 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:38:02.492 02:08:02 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:38:02.492 02:08:02 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:38:02.492 02:08:02 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:38:02.492 02:08:02 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:38:02.492 02:08:02 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:38:02.492 02:08:02 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:38:02.492 02:08:02 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:38:02.492 02:08:02 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:38:02.492 02:08:02 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:38:02.492 02:08:02 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:38:02.492 02:08:02 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:02.492 02:08:02 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:38:02.492 02:08:02 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:38:02.492 02:08:02 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:38:02.492 02:08:02 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:02.492 02:08:02 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:38:02.492 02:08:02 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:38:02.492 02:08:02 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:38:02.492 02:08:02 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:38:02.492 02:08:02 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:38:02.492 02:08:02 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:38:02.492 02:08:02 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:38:02.492 02:08:02 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:38:02.492 02:08:02 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:38:02.492 02:08:02 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:38:02.492 02:08:02 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:38:02.492 02:08:02 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:38:02.492 02:08:02 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:38:02.492 02:08:02 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:38:02.492 02:08:02 -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:38:02.492 02:08:02 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=y 00:38:02.492 02:08:02 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:38:02.492 02:08:02 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:02.492 02:08:02 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:38:02.492 02:08:02 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:38:02.492 02:08:02 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:38:02.492 02:08:02 -- common/build_config.sh@72 -- # CONFIG_RAID5F=y 00:38:02.492 02:08:02 -- common/build_config.sh@73 -- # 
CONFIG_EXAMPLES=y 00:38:02.492 02:08:02 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:38:02.492 02:08:02 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:38:02.492 02:08:02 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:38:02.492 02:08:02 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:38:02.492 02:08:02 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:38:02.492 02:08:02 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:38:02.492 02:08:02 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:02.492 02:08:02 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:38:02.492 02:08:02 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:38:02.492 02:08:02 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:02.492 02:08:02 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:02.492 02:08:02 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:38:02.492 02:08:02 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:38:02.492 02:08:02 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:38:02.492 02:08:02 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:38:02.492 02:08:02 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:38:02.492 02:08:02 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:38:02.492 02:08:02 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:38:02.492 02:08:02 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:38:02.492 02:08:02 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:38:02.492 02:08:02 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:38:02.492 02:08:02 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:38:02.492 02:08:02 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:38:02.492 02:08:02 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:38:02.492 02:08:02 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:38:02.492 #define SPDK_CONFIG_H 00:38:02.492 #define SPDK_CONFIG_APPS 1 00:38:02.492 #define SPDK_CONFIG_ARCH native 00:38:02.492 #define SPDK_CONFIG_ASAN 1 00:38:02.492 #undef SPDK_CONFIG_AVAHI 00:38:02.492 #undef SPDK_CONFIG_CET 00:38:02.492 #define SPDK_CONFIG_COVERAGE 1 00:38:02.492 #define SPDK_CONFIG_CROSS_PREFIX 00:38:02.492 #undef SPDK_CONFIG_CRYPTO 00:38:02.492 #undef SPDK_CONFIG_CRYPTO_MLX5 00:38:02.492 #undef SPDK_CONFIG_CUSTOMOCF 00:38:02.492 #undef SPDK_CONFIG_DAOS 00:38:02.492 #define SPDK_CONFIG_DAOS_DIR 00:38:02.492 #define SPDK_CONFIG_DEBUG 1 00:38:02.492 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:38:02.492 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:38:02.492 #define SPDK_CONFIG_DPDK_INC_DIR 00:38:02.492 #define SPDK_CONFIG_DPDK_LIB_DIR 00:38:02.492 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:38:02.492 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:02.492 #define SPDK_CONFIG_EXAMPLES 1 00:38:02.492 #undef SPDK_CONFIG_FC 00:38:02.492 #define SPDK_CONFIG_FC_PATH 00:38:02.492 #define SPDK_CONFIG_FIO_PLUGIN 1 00:38:02.492 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:38:02.492 #undef SPDK_CONFIG_FUSE 00:38:02.492 #undef SPDK_CONFIG_FUZZER 00:38:02.492 #define 
SPDK_CONFIG_FUZZER_LIB 00:38:02.492 #undef SPDK_CONFIG_GOLANG 00:38:02.492 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:38:02.492 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:38:02.492 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:38:02.492 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:38:02.492 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:38:02.492 #undef SPDK_CONFIG_HAVE_LIBBSD 00:38:02.492 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:38:02.492 #define SPDK_CONFIG_IDXD 1 00:38:02.492 #undef SPDK_CONFIG_IDXD_KERNEL 00:38:02.492 #undef SPDK_CONFIG_IPSEC_MB 00:38:02.492 #define SPDK_CONFIG_IPSEC_MB_DIR 00:38:02.492 #define SPDK_CONFIG_ISAL 1 00:38:02.492 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:38:02.492 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:38:02.492 #define SPDK_CONFIG_LIBDIR 00:38:02.492 #undef SPDK_CONFIG_LTO 00:38:02.492 #define SPDK_CONFIG_MAX_LCORES 00:38:02.492 #define SPDK_CONFIG_NVME_CUSE 1 00:38:02.492 #undef SPDK_CONFIG_OCF 00:38:02.492 #define SPDK_CONFIG_OCF_PATH 00:38:02.492 #define SPDK_CONFIG_OPENSSL_PATH 00:38:02.492 #undef SPDK_CONFIG_PGO_CAPTURE 00:38:02.492 #define SPDK_CONFIG_PGO_DIR 00:38:02.492 #undef SPDK_CONFIG_PGO_USE 00:38:02.492 #define SPDK_CONFIG_PREFIX /usr/local 00:38:02.492 #define SPDK_CONFIG_RAID5F 1 00:38:02.492 #undef SPDK_CONFIG_RBD 00:38:02.492 #define SPDK_CONFIG_RDMA 1 00:38:02.492 #define SPDK_CONFIG_RDMA_PROV verbs 00:38:02.492 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:38:02.492 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:38:02.492 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:38:02.492 #undef SPDK_CONFIG_SHARED 00:38:02.492 #undef SPDK_CONFIG_SMA 00:38:02.492 #define SPDK_CONFIG_TESTS 1 00:38:02.492 #undef SPDK_CONFIG_TSAN 00:38:02.492 #undef SPDK_CONFIG_UBLK 00:38:02.492 #define SPDK_CONFIG_UBSAN 1 00:38:02.492 #define SPDK_CONFIG_UNIT_TESTS 1 00:38:02.492 #undef SPDK_CONFIG_URING 00:38:02.492 #define SPDK_CONFIG_URING_PATH 00:38:02.492 #undef SPDK_CONFIG_URING_ZNS 00:38:02.492 #undef SPDK_CONFIG_USDT 00:38:02.492 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:38:02.492 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:38:02.492 #undef SPDK_CONFIG_VFIO_USER 00:38:02.492 #define SPDK_CONFIG_VFIO_USER_DIR 00:38:02.492 #define SPDK_CONFIG_VHOST 1 00:38:02.492 #define SPDK_CONFIG_VIRTIO 1 00:38:02.492 #undef SPDK_CONFIG_VTUNE 00:38:02.492 #define SPDK_CONFIG_VTUNE_DIR 00:38:02.492 #define SPDK_CONFIG_WERROR 1 00:38:02.492 #define SPDK_CONFIG_WPDK_DIR 00:38:02.492 #undef SPDK_CONFIG_XNVME 00:38:02.492 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:38:02.492 02:08:02 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:38:02.492 02:08:02 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:02.492 02:08:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.492 02:08:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.492 02:08:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.492 02:08:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:02.493 02:08:02 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:02.493 02:08:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:02.493 02:08:02 -- paths/export.sh@5 -- # export PATH 00:38:02.493 02:08:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:02.493 02:08:02 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:02.493 02:08:02 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:02.493 02:08:02 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:02.493 02:08:02 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:02.493 02:08:02 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:38:02.493 02:08:02 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:38:02.493 02:08:02 -- pm/common@67 -- # TEST_TAG=N/A 00:38:02.493 02:08:02 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:38:02.493 02:08:02 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:38:02.493 02:08:02 -- pm/common@71 -- # uname -s 00:38:02.493 02:08:02 -- pm/common@71 -- # PM_OS=Linux 00:38:02.493 02:08:02 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:38:02.493 02:08:02 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:38:02.493 02:08:02 -- pm/common@76 -- # [[ Linux == Linux ]] 00:38:02.493 02:08:02 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:38:02.493 02:08:02 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:38:02.493 02:08:02 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:38:02.493 02:08:02 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:38:02.493 02:08:02 -- common/autotest_common.sh@57 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:38:02.493 02:08:02 -- common/autotest_common.sh@61 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:38:02.493 02:08:02 -- common/autotest_common.sh@63 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:38:02.493 02:08:02 -- common/autotest_common.sh@65 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:38:02.493 02:08:02 -- common/autotest_common.sh@67 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@68 -- # export 
SPDK_TEST_UNITTEST 00:38:02.493 02:08:02 -- common/autotest_common.sh@69 -- # : 00:38:02.493 02:08:02 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:38:02.493 02:08:02 -- common/autotest_common.sh@71 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:38:02.493 02:08:02 -- common/autotest_common.sh@73 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:38:02.493 02:08:02 -- common/autotest_common.sh@75 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:38:02.493 02:08:02 -- common/autotest_common.sh@77 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:38:02.493 02:08:02 -- common/autotest_common.sh@79 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:38:02.493 02:08:02 -- common/autotest_common.sh@81 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:38:02.493 02:08:02 -- common/autotest_common.sh@83 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:38:02.493 02:08:02 -- common/autotest_common.sh@85 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:38:02.493 02:08:02 -- common/autotest_common.sh@87 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:38:02.493 02:08:02 -- common/autotest_common.sh@89 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:38:02.493 02:08:02 -- common/autotest_common.sh@91 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:38:02.493 02:08:02 -- common/autotest_common.sh@93 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:38:02.493 02:08:02 -- common/autotest_common.sh@95 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:38:02.493 02:08:02 -- common/autotest_common.sh@97 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:38:02.493 02:08:02 -- common/autotest_common.sh@99 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:38:02.493 02:08:02 -- common/autotest_common.sh@101 -- # : rdma 00:38:02.493 02:08:02 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:38:02.493 02:08:02 -- common/autotest_common.sh@103 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:38:02.493 02:08:02 -- common/autotest_common.sh@105 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:38:02.493 02:08:02 -- common/autotest_common.sh@107 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:38:02.493 02:08:02 -- common/autotest_common.sh@109 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:38:02.493 02:08:02 -- common/autotest_common.sh@111 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:38:02.493 02:08:02 -- common/autotest_common.sh@113 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:38:02.493 02:08:02 -- common/autotest_common.sh@115 -- # : 0 00:38:02.493 02:08:02 -- 
common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:38:02.493 02:08:02 -- common/autotest_common.sh@117 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:38:02.493 02:08:02 -- common/autotest_common.sh@119 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:38:02.493 02:08:02 -- common/autotest_common.sh@121 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:38:02.493 02:08:02 -- common/autotest_common.sh@123 -- # : 00:38:02.493 02:08:02 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:38:02.493 02:08:02 -- common/autotest_common.sh@125 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:38:02.493 02:08:02 -- common/autotest_common.sh@127 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:38:02.493 02:08:02 -- common/autotest_common.sh@129 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:38:02.493 02:08:02 -- common/autotest_common.sh@131 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:38:02.493 02:08:02 -- common/autotest_common.sh@133 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:38:02.493 02:08:02 -- common/autotest_common.sh@135 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:38:02.493 02:08:02 -- common/autotest_common.sh@137 -- # : 00:38:02.493 02:08:02 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:38:02.493 02:08:02 -- common/autotest_common.sh@139 -- # : true 00:38:02.493 02:08:02 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:38:02.493 02:08:02 -- common/autotest_common.sh@141 -- # : 1 00:38:02.493 02:08:02 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:38:02.493 02:08:02 -- common/autotest_common.sh@143 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:38:02.493 02:08:02 -- common/autotest_common.sh@145 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:38:02.493 02:08:02 -- common/autotest_common.sh@147 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:38:02.493 02:08:02 -- common/autotest_common.sh@149 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:38:02.493 02:08:02 -- common/autotest_common.sh@151 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:38:02.493 02:08:02 -- common/autotest_common.sh@153 -- # : 00:38:02.493 02:08:02 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:38:02.493 02:08:02 -- common/autotest_common.sh@155 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:38:02.493 02:08:02 -- common/autotest_common.sh@157 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:38:02.493 02:08:02 -- common/autotest_common.sh@159 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:38:02.493 02:08:02 -- common/autotest_common.sh@161 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:38:02.493 02:08:02 -- common/autotest_common.sh@163 -- # : 0 00:38:02.493 02:08:02 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:38:02.493 02:08:02 -- common/autotest_common.sh@166 -- # : 00:38:02.493 02:08:02 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:38:02.493 02:08:02 -- common/autotest_common.sh@168 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:38:02.493 02:08:02 -- common/autotest_common.sh@170 -- # : 0 00:38:02.493 02:08:02 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:38:02.493 02:08:02 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:02.493 02:08:02 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:02.493 02:08:02 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:38:02.493 02:08:02 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:38:02.493 02:08:02 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:02.493 02:08:02 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:02.494 02:08:02 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:02.494 02:08:02 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:02.494 02:08:02 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:38:02.494 02:08:02 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:38:02.494 02:08:02 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:02.494 02:08:02 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:02.494 02:08:02 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:38:02.494 02:08:02 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:38:02.494 02:08:02 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:02.494 02:08:02 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:02.494 02:08:02 
-- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:02.494 02:08:02 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:02.494 02:08:02 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:38:02.494 02:08:02 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:38:02.494 02:08:02 -- common/autotest_common.sh@199 -- # cat 00:38:02.494 02:08:02 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:38:02.494 02:08:02 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:02.494 02:08:02 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:02.494 02:08:02 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:02.494 02:08:02 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:02.494 02:08:02 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:38:02.494 02:08:02 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:38:02.494 02:08:02 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:02.494 02:08:02 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:02.494 02:08:02 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:02.494 02:08:02 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:02.494 02:08:02 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:38:02.494 02:08:02 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:38:02.494 02:08:02 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:02.494 02:08:02 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:02.494 02:08:02 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:02.494 02:08:02 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:02.494 02:08:02 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:02.494 02:08:02 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:02.494 02:08:02 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:38:02.494 02:08:02 -- common/autotest_common.sh@252 -- # export valgrind= 00:38:02.494 02:08:02 -- common/autotest_common.sh@252 -- # valgrind= 00:38:02.494 02:08:02 -- common/autotest_common.sh@258 -- # uname -s 00:38:02.494 02:08:02 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:38:02.494 02:08:02 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:38:02.494 02:08:02 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:38:02.494 02:08:02 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:38:02.494 02:08:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@268 -- # MAKE=make 00:38:02.494 02:08:02 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:38:02.494 02:08:02 -- common/autotest_common.sh@285 -- # 
export HUGEMEM=4096 00:38:02.494 02:08:02 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:38:02.494 02:08:02 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:38:02.494 02:08:02 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:38:02.494 02:08:02 -- common/autotest_common.sh@307 -- # [[ -z 141988 ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@307 -- # kill -0 141988 00:38:02.494 02:08:02 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:38:02.494 02:08:02 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:38:02.494 02:08:02 -- common/autotest_common.sh@320 -- # local mount target_dir 00:38:02.494 02:08:02 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:38:02.494 02:08:02 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:38:02.494 02:08:02 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:38:02.494 02:08:02 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:38:02.494 02:08:02 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.zbs5il 00:38:02.494 02:08:02 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:38:02.494 02:08:02 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.zbs5il/tests/interrupt /tmp/spdk.zbs5il 00:38:02.494 02:08:02 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@316 -- # df -T 00:38:02.494 02:08:02 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=1248956416 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253683200 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4726784 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=10263990272 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=10336026624 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=6263685120 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6268395520 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use 
avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=103061504 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109395968 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253675008 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253679104 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:38:02.494 02:08:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=94212263936 00:38:02.494 02:08:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:38:02.494 02:08:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=5490515968 00:38:02.494 02:08:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:02.494 02:08:02 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:38:02.494 * Looking for test storage... 00:38:02.494 02:08:02 -- common/autotest_common.sh@357 -- # local target_space new_size 00:38:02.494 02:08:02 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:38:02.494 02:08:02 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.494 02:08:02 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:38:02.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:02.494 02:08:02 -- common/autotest_common.sh@361 -- # mount=/ 00:38:02.494 02:08:02 -- common/autotest_common.sh@363 -- # target_space=10263990272 00:38:02.494 02:08:02 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:38:02.494 02:08:02 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:38:02.494 02:08:02 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:38:02.494 02:08:02 -- common/autotest_common.sh@370 -- # new_size=12550619136 00:38:02.494 02:08:02 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:38:02.495 02:08:02 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.495 02:08:02 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.495 02:08:02 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:02.495 02:08:02 -- common/autotest_common.sh@378 -- # return 0 00:38:02.495 02:08:02 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:38:02.495 02:08:02 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:38:02.495 02:08:02 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:38:02.495 02:08:02 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:38:02.495 02:08:02 -- common/autotest_common.sh@1673 -- # true 00:38:02.495 02:08:02 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:38:02.495 02:08:02 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:38:02.495 02:08:02 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:38:02.495 02:08:02 -- common/autotest_common.sh@27 -- # exec 00:38:02.495 02:08:02 -- common/autotest_common.sh@29 -- # exec 00:38:02.495 02:08:02 -- common/autotest_common.sh@31 -- # xtrace_restore 00:38:02.495 02:08:02 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:38:02.495 02:08:02 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:38:02.495 02:08:02 -- common/autotest_common.sh@18 -- # set -x 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:38:02.495 02:08:02 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:02.495 02:08:02 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:02.495 02:08:02 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142038 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:02.495 02:08:02 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142038 /var/tmp/spdk.sock 00:38:02.495 02:08:02 -- common/autotest_common.sh@817 -- # '[' -z 142038 ']' 00:38:02.495 02:08:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.495 02:08:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:02.495 02:08:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.495 02:08:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:02.495 02:08:02 -- common/autotest_common.sh@10 -- # set +x 00:38:02.495 [2024-04-24 02:08:02.442691] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
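For reference, the set_test_storage pass traced above works in two steps: it snapshots every mount from a single df -T into associative arrays, then walks the candidate directories (the test directory, a /tmp/spdk.XXXXXX fallback, and the fallback root) and keeps the first one whose filesystem has at least the requested ~2 GiB free, exporting it as SPDK_TEST_STORAGE. A condensed sketch of that selection, reusing the variable names from the trace; it is not the verbatim helper from autotest_common.sh:

  # Snapshot all mounts once: device, fs type, size, used and available bytes per mount point.
  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source; fss["$mount"]=$fs
    sizes["$mount"]=$size; uses["$mount"]=$use; avails["$mount"]=$avail
  done < <(df -T --block-size=1 | grep -v Filesystem)

  # Keep the first candidate directory whose mount has enough free space.
  requested_size=2214592512    # requested bytes plus slack, as reported in the trace
  for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( avails["$mount"] >= requested_size )); then
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$target_dir"
      break
    fi
  done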
00:38:02.495 [2024-04-24 02:08:02.442896] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142038 ] 00:38:02.754 [2024-04-24 02:08:02.647662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:03.013 [2024-04-24 02:08:02.947452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.013 [2024-04-24 02:08:02.947613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.013 [2024-04-24 02:08:02.947613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:03.272 [2024-04-24 02:08:03.300734] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:03.531 02:08:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:03.531 02:08:03 -- common/autotest_common.sh@850 -- # return 0 00:38:03.531 02:08:03 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:38:03.531 02:08:03 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:03.789 Malloc0 00:38:03.789 Malloc1 00:38:03.789 Malloc2 00:38:03.789 02:08:03 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:38:03.789 02:08:03 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:38:03.789 02:08:03 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:03.789 02:08:03 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:04.048 5000+0 records in 00:38:04.048 5000+0 records out 00:38:04.048 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0312779 s, 327 MB/s 00:38:04.048 02:08:03 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:04.306 AIO0 00:38:04.306 02:08:04 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 142038 00:38:04.306 02:08:04 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 142038 without_thd 00:38:04.306 02:08:04 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142038 00:38:04.306 02:08:04 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:38:04.306 02:08:04 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:38:04.306 02:08:04 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:38:04.306 02:08:04 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:38:04.306 02:08:04 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:38:04.306 02:08:04 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:38:04.306 02:08:04 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:04.306 02:08:04 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:04.306 02:08:04 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:38:04.564 02:08:04 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:38:04.564 02:08:04 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:04.564 02:08:04 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:38:04.822 02:08:04 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:38:04.822 02:08:04 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:38:04.822 spdk_thread ids are 1 on reactor0. 00:38:04.822 02:08:04 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:04.822 02:08:04 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142038 0 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142038 0 idle 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:38:04.822 02:08:04 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:05.081 02:08:04 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142038 root 20 0 20.1t 148976 31396 S 0.0 1.2 0:01.02 reactor_0' 00:38:05.081 02:08:04 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:05.081 02:08:04 -- interrupt/interrupt_common.sh@48 -- # echo 142038 root 20 0 20.1t 148976 31396 S 0.0 1.2 0:01.02 reactor_0 00:38:05.081 02:08:04 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:05.081 02:08:05 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:05.081 02:08:05 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142038 1 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142038 1 idle 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:05.081 
02:08:05 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:05.081 02:08:05 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142042 root 20 0 20.1t 148976 31396 S 0.0 1.2 0:00.00 reactor_1' 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # echo 142042 root 20 0 20.1t 148976 31396 S 0.0 1.2 0:00.00 reactor_1 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:05.341 02:08:05 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:05.341 02:08:05 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142038 2 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142038 2 idle 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142043 root 20 0 20.1t 148976 31396 S 0.0 1.2 0:00.00 reactor_2' 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # echo 142043 root 20 0 20.1t 148976 31396 S 0.0 1.2 0:00.00 reactor_2 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:05.341 02:08:05 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:05.341 02:08:05 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:38:05.341 02:08:05 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
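The reactor_is_idle / reactor_is_busy checks that recur through this trace all reduce to the same probe: take one threaded top sample of the target PID, read the %CPU column (field 9) for the reactor_<idx> thread, and compare it against the thresholds used here (busy means at least 70%, idle means at most 30%). A condensed sketch of that probe; the function name is made up for illustration and the real interrupt_common.sh helper carries more bookkeeping:

  reactor_cpu_state_ok() {     # usage: reactor_cpu_state_ok <pid> <idx> busy|idle
    local pid=$1 idx=$2 state=$3 line cpu_rate
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}    # 99.9 -> 99, 0.0 -> 0
    if [[ $state == busy ]]; then
      (( cpu_rate >= 70 ))     # a polling reactor should sit near 100% CPU
    else
      (( cpu_rate <= 30 ))     # an interrupt-mode reactor should sit near 0% CPU
    fi
  }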
00:38:05.341 02:08:05 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:38:05.600 [2024-04-24 02:08:05.665924] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:05.859 02:08:05 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:38:06.120 [2024-04-24 02:08:05.981552] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:38:06.120 [2024-04-24 02:08:05.982427] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:06.120 02:08:05 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:38:06.378 [2024-04-24 02:08:06.281434] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:38:06.379 [2024-04-24 02:08:06.282284] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:06.379 02:08:06 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:06.379 02:08:06 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142038 0 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142038 0 busy 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:06.379 02:08:06 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142038 root 20 0 20.1t 149112 31396 R 99.9 1.2 0:01.52 reactor_0' 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # echo 142038 root 20 0 20.1t 149112 31396 R 99.9 1.2 0:01.52 reactor_0 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:06.637 02:08:06 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:06.637 02:08:06 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142038 2 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142038 2 busy 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:38:06.637 
02:08:06 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142043 root 20 0 20.1t 149112 31396 R 99.9 1.2 0:00.37 reactor_2' 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # echo 142043 root 20 0 20.1t 149112 31396 R 99.9 1.2 0:00.37 reactor_2 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:38:06.637 02:08:06 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:06.637 02:08:06 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:38:06.895 [2024-04-24 02:08:06.917457] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:38:06.895 [2024-04-24 02:08:06.918287] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:06.895 02:08:06 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:38:06.895 02:08:06 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142038 2 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142038 2 idle 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:06.895 02:08:06 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142043 root 20 0 20.1t 149168 31396 S 0.0 1.2 0:00.63 reactor_2' 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@48 -- # echo 142043 root 20 0 20.1t 149168 31396 S 0.0 1.2 0:00.63 reactor_2 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:07.154 02:08:07 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:07.154 02:08:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:07.154 02:08:07 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:38:07.414 [2024-04-24 02:08:07.289419] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:38:07.414 [2024-04-24 02:08:07.290221] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:07.414 02:08:07 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:38:07.414 02:08:07 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:38:07.414 02:08:07 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:38:07.672 [2024-04-24 02:08:07.586135] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:07.672 02:08:07 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142038 0 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142038 0 idle 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@33 -- # local pid=142038 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142038 -w 256 00:38:07.672 02:08:07 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:38:07.929 02:08:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142038 root 20 0 20.1t 149260 31396 S 0.0 1.2 0:02.34 reactor_0' 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@48 -- # echo 142038 root 20 0 20.1t 149260 31396 S 0.0 1.2 0:02.34 reactor_0 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:07.930 02:08:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:07.930 02:08:07 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:38:07.930 02:08:07 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:38:07.930 02:08:07 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:38:07.930 02:08:07 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 142038 
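killprocess, traced next, is the standard teardown helper: confirm the PID is still alive, refuse to kill anything whose command name is sudo, then kill it and wait for it so the next test starts from a clean slate. A rough paraphrase, not the verbatim autotest_common.sh function:

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1           # never kill a sudo wrapper by mistake
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap it; only works for the shell's own children
  }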
00:38:07.930 02:08:07 -- common/autotest_common.sh@936 -- # '[' -z 142038 ']' 00:38:07.930 02:08:07 -- common/autotest_common.sh@940 -- # kill -0 142038 00:38:07.930 02:08:07 -- common/autotest_common.sh@941 -- # uname 00:38:07.930 02:08:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:07.930 02:08:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142038 00:38:07.930 02:08:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:07.930 02:08:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:07.930 02:08:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142038' 00:38:07.930 killing process with pid 142038 00:38:07.930 02:08:07 -- common/autotest_common.sh@955 -- # kill 142038 00:38:07.930 02:08:07 -- common/autotest_common.sh@960 -- # wait 142038 00:38:09.306 [2024-04-24 02:08:09.203229] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:38:09.565 02:08:09 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:38:09.565 02:08:09 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:09.824 02:08:09 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:38:09.824 02:08:09 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.824 02:08:09 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:38:09.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.824 02:08:09 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142199 00:38:09.824 02:08:09 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:09.824 02:08:09 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:09.824 02:08:09 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142199 /var/tmp/spdk.sock 00:38:09.824 02:08:09 -- common/autotest_common.sh@817 -- # '[' -z 142199 ']' 00:38:09.824 02:08:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.824 02:08:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:09.824 02:08:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.824 02:08:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:09.824 02:08:09 -- common/autotest_common.sh@10 -- # set +x 00:38:09.824 [2024-04-24 02:08:09.719276] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
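The without_thd half of the test that just finished drives every mode switch over the RPC socket: move the app thread off reactor 0, flip reactors 0 and 2 to poll mode and confirm via top that they spin, then flip them back to interrupt mode and confirm they go quiet. The calls involved, as they appear in the trace (-d disables interrupt mode, i.e. switches the reactor to polling):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc thread_set_cpumask -i 1 -m 0x2                              # pin app_thread to reactor 1
  $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
  $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
  # ...verify reactor_0 and reactor_2 report ~100% CPU...
  $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> interrupt mode
  $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> interrupt mode
  $rpc thread_set_cpumask -i 1 -m 0x1                              # move app_thread back to reactor 0
  # ...verify reactor_0 is idle again...

The second instance started above (pid 142199) repeats the same disable/enable sequence without the thread_set_cpumask steps, since it exercises the with-threads path.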
00:38:09.824 [2024-04-24 02:08:09.719880] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142199 ] 00:38:10.082 [2024-04-24 02:08:09.909014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:10.082 [2024-04-24 02:08:10.123034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.082 [2024-04-24 02:08:10.123139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:10.082 [2024-04-24 02:08:10.123321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.648 [2024-04-24 02:08:10.469910] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:10.648 02:08:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:10.648 02:08:10 -- common/autotest_common.sh@850 -- # return 0 00:38:10.648 02:08:10 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:38:10.648 02:08:10 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:10.907 Malloc0 00:38:10.907 Malloc1 00:38:10.907 Malloc2 00:38:10.907 02:08:10 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:38:10.907 02:08:10 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:38:10.907 02:08:10 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:10.907 02:08:10 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:11.165 5000+0 records in 00:38:11.165 5000+0 records out 00:38:11.165 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0363626 s, 282 MB/s 00:38:11.165 02:08:11 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:11.165 AIO0 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 142199 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 142199 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142199 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:38:11.429 02:08:11 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:38:11.429 02:08:11 
-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:11.429 02:08:11 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:38:11.686 02:08:11 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:38:11.686 02:08:11 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:38:11.686 spdk_thread ids are 1 on reactor0. 00:38:11.686 02:08:11 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:11.686 02:08:11 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142199 0 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142199 0 idle 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:11.686 02:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:11.687 02:08:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:11.687 02:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:11.687 02:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:11.687 02:08:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:11.687 02:08:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142199 root 20 0 20.1t 149020 31460 S 0.0 1.2 0:00.85 reactor_0' 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@48 -- # echo 142199 root 20 0 20.1t 149020 31460 S 0.0 1.2 0:00.85 reactor_0 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:11.944 02:08:11 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:11.944 02:08:11 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142199 1 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142199 1 idle 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:11.944 02:08:11 -- 
interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:11.944 02:08:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142202 root 20 0 20.1t 149020 31460 S 0.0 1.2 0:00.00 reactor_1' 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 142202 root 20 0 20.1t 149020 31460 S 0.0 1.2 0:00.00 reactor_1 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:12.202 02:08:12 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:12.202 02:08:12 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142199 2 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142199 2 idle 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142203 root 20 0 20.1t 149020 31460 S 0.0 1.2 0:00.00 reactor_2' 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 142203 root 20 0 20.1t 149020 31460 S 0.0 1.2 0:00.00 reactor_2 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:12.202 02:08:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:12.202 02:08:12 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:38:12.202 02:08:12 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin 
interrupt_plugin reactor_set_interrupt_mode 0 -d 00:38:12.459 [2024-04-24 02:08:12.388585] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:38:12.459 [2024-04-24 02:08:12.388881] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:38:12.459 [2024-04-24 02:08:12.389113] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:12.460 02:08:12 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:38:12.717 [2024-04-24 02:08:12.576290] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:38:12.717 [2024-04-24 02:08:12.577061] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:12.717 02:08:12 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:12.717 02:08:12 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142199 0 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142199 0 busy 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142199 root 20 0 20.1t 149076 31460 R 99.9 1.2 0:01.23 reactor_0' 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 142199 root 20 0 20.1t 149076 31460 R 99.9 1.2 0:01.23 reactor_0 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:12.717 02:08:12 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:12.717 02:08:12 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142199 2 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142199 2 busy 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:12.717 02:08:12 -- 
interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:12.717 02:08:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142203 root 20 0 20.1t 149076 31460 R 99.9 1.2 0:00.35 reactor_2' 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 142203 root 20 0 20.1t 149076 31460 R 99.9 1.2 0:00.35 reactor_2 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:38:12.976 02:08:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:12.976 02:08:12 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:38:13.234 [2024-04-24 02:08:13.212548] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:38:13.234 [2024-04-24 02:08:13.213054] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:13.234 02:08:13 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:38:13.234 02:08:13 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142199 2 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142199 2 idle 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:13.234 02:08:13 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142203 root 20 0 20.1t 149176 31460 S 0.0 1.2 0:00.63 reactor_2' 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@48 -- # echo 142203 root 20 0 20.1t 149176 31460 S 0.0 1.2 0:00.63 reactor_2 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@53 -- 
# [[ 0 -gt 30 ]] 00:38:13.497 02:08:13 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:13.497 02:08:13 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:38:13.756 [2024-04-24 02:08:13.676592] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:38:13.756 [2024-04-24 02:08:13.677270] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:38:13.756 [2024-04-24 02:08:13.677411] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:13.756 02:08:13 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:38:13.756 02:08:13 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142199 0 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142199 0 idle 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@33 -- # local pid=142199 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@41 -- # hash top 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142199 -w 256 00:38:13.756 02:08:13 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142199 root 20 0 20.1t 149220 31460 S 0.0 1.2 0:02.15 reactor_0' 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@48 -- # echo 142199 root 20 0 20.1t 149220 31460 S 0.0 1.2 0:02.15 reactor_0 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:38:14.014 02:08:13 -- interrupt/interrupt_common.sh@56 -- # return 0 00:38:14.014 02:08:13 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:38:14.014 02:08:13 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:38:14.014 02:08:13 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:38:14.014 02:08:13 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 142199 00:38:14.014 02:08:13 -- common/autotest_common.sh@936 -- # '[' -z 142199 ']' 00:38:14.014 02:08:13 -- common/autotest_common.sh@940 -- # kill -0 142199 00:38:14.014 02:08:13 -- common/autotest_common.sh@941 -- # uname 00:38:14.014 02:08:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:14.014 02:08:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142199 00:38:14.014 02:08:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:14.014 02:08:13 -- common/autotest_common.sh@946 -- # 
'[' reactor_0 = sudo ']' 00:38:14.014 02:08:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142199' 00:38:14.014 killing process with pid 142199 00:38:14.014 02:08:13 -- common/autotest_common.sh@955 -- # kill 142199 00:38:14.014 02:08:13 -- common/autotest_common.sh@960 -- # wait 142199 00:38:15.385 [2024-04-24 02:08:15.077429] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:38:15.642 02:08:15 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:38:15.642 02:08:15 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:15.642 ************************************ 00:38:15.642 END TEST reactor_set_interrupt 00:38:15.643 ************************************ 00:38:15.643 00:38:15.643 real 0m13.473s 00:38:15.643 user 0m14.025s 00:38:15.643 sys 0m2.047s 00:38:15.643 02:08:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:15.643 02:08:15 -- common/autotest_common.sh@10 -- # set +x 00:38:15.643 02:08:15 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:15.643 02:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:38:15.643 02:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:15.643 02:08:15 -- common/autotest_common.sh@10 -- # set +x 00:38:15.643 ************************************ 00:38:15.643 START TEST reap_unregistered_poller 00:38:15.643 ************************************ 00:38:15.643 02:08:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:15.902 * Looking for test storage... 00:38:15.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.902 02:08:15 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:38:15.902 02:08:15 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:15.902 02:08:15 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.902 02:08:15 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.902 02:08:15 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
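The trace has now moved on to the second interrupt test, reap_unregistered_poller.sh, and is running the shared path bootstrap from interrupt_common.sh: resolve the directory of the test script as testdir, walk two levels up for the repository root, then source test/common/autotest_common.sh (whose CONFIG_* settings from build_config.sh are traced below). Roughly:

  testdir=$(readlink -f "$(dirname "$0")")    # $0 is the executing test script, .../test/interrupt
  rootdir=$(readlink -f "$testdir/../..")     # .../spdk
  source "$rootdir/test/common/autotest_common.sh"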
00:38:15.902 02:08:15 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:15.902 02:08:15 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:38:15.902 02:08:15 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:38:15.902 02:08:15 -- common/autotest_common.sh@34 -- # set -e 00:38:15.902 02:08:15 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:38:15.902 02:08:15 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:38:15.902 02:08:15 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:38:15.902 02:08:15 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:15.902 02:08:15 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:15.902 02:08:15 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:15.902 02:08:15 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:38:15.902 02:08:15 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:15.902 02:08:15 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:15.903 02:08:15 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:38:15.903 02:08:15 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:15.903 02:08:15 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:15.903 02:08:15 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:15.903 02:08:15 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:15.903 02:08:15 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:15.903 02:08:15 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:15.903 02:08:15 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:15.903 02:08:15 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:15.903 02:08:15 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:15.903 02:08:15 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:38:15.903 02:08:15 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:38:15.903 02:08:15 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:15.903 02:08:15 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:15.903 02:08:15 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:38:15.903 02:08:15 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:38:15.903 02:08:15 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:38:15.903 02:08:15 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:38:15.903 02:08:15 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:38:15.903 02:08:15 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:38:15.903 02:08:15 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:38:15.903 02:08:15 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:15.903 02:08:15 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:38:15.903 02:08:15 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:38:15.903 02:08:15 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:38:15.903 02:08:15 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:38:15.903 02:08:15 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:38:15.903 02:08:15 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:38:15.903 02:08:15 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:38:15.903 02:08:15 
-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:38:15.903 02:08:15 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:38:15.903 02:08:15 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:38:15.903 02:08:15 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:38:15.903 02:08:15 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:38:15.903 02:08:15 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:38:15.903 02:08:15 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:15.903 02:08:15 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:38:15.903 02:08:15 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:38:15.903 02:08:15 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:38:15.903 02:08:15 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:15.903 02:08:15 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:38:15.903 02:08:15 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:38:15.903 02:08:15 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:38:15.903 02:08:15 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:38:15.903 02:08:15 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:38:15.903 02:08:15 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:38:15.903 02:08:15 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:38:15.903 02:08:15 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:38:15.903 02:08:15 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:38:15.903 02:08:15 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:38:15.903 02:08:15 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:38:15.903 02:08:15 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:38:15.903 02:08:15 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:38:15.903 02:08:15 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:38:15.903 02:08:15 -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:38:15.903 02:08:15 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=y 00:38:15.903 02:08:15 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:38:15.903 02:08:15 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:15.903 02:08:15 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:38:15.903 02:08:15 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:38:15.903 02:08:15 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:38:15.903 02:08:15 -- common/build_config.sh@72 -- # CONFIG_RAID5F=y 00:38:15.903 02:08:15 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:38:15.903 02:08:15 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:38:15.903 02:08:15 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:38:15.903 02:08:15 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:38:15.903 02:08:15 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:38:15.903 02:08:15 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:38:15.903 02:08:15 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:38:15.903 02:08:15 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:15.903 02:08:15 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:38:15.903 02:08:15 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:38:15.903 02:08:15 -- 
common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:15.903 02:08:15 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:15.903 02:08:15 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:38:15.903 02:08:15 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:38:15.903 02:08:15 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:38:15.903 02:08:15 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:38:15.903 02:08:15 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:38:15.903 02:08:15 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:38:15.903 02:08:15 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:38:15.903 02:08:15 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:38:15.903 02:08:15 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:38:15.903 02:08:15 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:38:15.903 02:08:15 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:38:15.903 02:08:15 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:38:15.903 02:08:15 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:38:15.903 02:08:15 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:38:15.903 #define SPDK_CONFIG_H 00:38:15.903 #define SPDK_CONFIG_APPS 1 00:38:15.903 #define SPDK_CONFIG_ARCH native 00:38:15.903 #define SPDK_CONFIG_ASAN 1 00:38:15.903 #undef SPDK_CONFIG_AVAHI 00:38:15.903 #undef SPDK_CONFIG_CET 00:38:15.903 #define SPDK_CONFIG_COVERAGE 1 00:38:15.903 #define SPDK_CONFIG_CROSS_PREFIX 00:38:15.903 #undef SPDK_CONFIG_CRYPTO 00:38:15.903 #undef SPDK_CONFIG_CRYPTO_MLX5 00:38:15.903 #undef SPDK_CONFIG_CUSTOMOCF 00:38:15.903 #undef SPDK_CONFIG_DAOS 00:38:15.903 #define SPDK_CONFIG_DAOS_DIR 00:38:15.903 #define SPDK_CONFIG_DEBUG 1 00:38:15.903 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:38:15.903 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:38:15.903 #define SPDK_CONFIG_DPDK_INC_DIR 00:38:15.903 #define SPDK_CONFIG_DPDK_LIB_DIR 00:38:15.903 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:38:15.903 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:15.903 #define SPDK_CONFIG_EXAMPLES 1 00:38:15.903 #undef SPDK_CONFIG_FC 00:38:15.903 #define SPDK_CONFIG_FC_PATH 00:38:15.903 #define SPDK_CONFIG_FIO_PLUGIN 1 00:38:15.903 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:38:15.903 #undef SPDK_CONFIG_FUSE 00:38:15.903 #undef SPDK_CONFIG_FUZZER 00:38:15.903 #define SPDK_CONFIG_FUZZER_LIB 00:38:15.903 #undef SPDK_CONFIG_GOLANG 00:38:15.903 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:38:15.903 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:38:15.903 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:38:15.903 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:38:15.903 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:38:15.903 #undef SPDK_CONFIG_HAVE_LIBBSD 00:38:15.903 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:38:15.903 #define SPDK_CONFIG_IDXD 1 00:38:15.903 #undef SPDK_CONFIG_IDXD_KERNEL 00:38:15.903 #undef SPDK_CONFIG_IPSEC_MB 00:38:15.903 #define SPDK_CONFIG_IPSEC_MB_DIR 00:38:15.903 #define SPDK_CONFIG_ISAL 1 00:38:15.903 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:38:15.903 #define SPDK_CONFIG_ISCSI_INITIATOR 
1 00:38:15.903 #define SPDK_CONFIG_LIBDIR 00:38:15.903 #undef SPDK_CONFIG_LTO 00:38:15.903 #define SPDK_CONFIG_MAX_LCORES 00:38:15.903 #define SPDK_CONFIG_NVME_CUSE 1 00:38:15.903 #undef SPDK_CONFIG_OCF 00:38:15.903 #define SPDK_CONFIG_OCF_PATH 00:38:15.903 #define SPDK_CONFIG_OPENSSL_PATH 00:38:15.903 #undef SPDK_CONFIG_PGO_CAPTURE 00:38:15.904 #define SPDK_CONFIG_PGO_DIR 00:38:15.904 #undef SPDK_CONFIG_PGO_USE 00:38:15.904 #define SPDK_CONFIG_PREFIX /usr/local 00:38:15.904 #define SPDK_CONFIG_RAID5F 1 00:38:15.904 #undef SPDK_CONFIG_RBD 00:38:15.904 #define SPDK_CONFIG_RDMA 1 00:38:15.904 #define SPDK_CONFIG_RDMA_PROV verbs 00:38:15.904 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:38:15.904 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:38:15.904 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:38:15.904 #undef SPDK_CONFIG_SHARED 00:38:15.904 #undef SPDK_CONFIG_SMA 00:38:15.904 #define SPDK_CONFIG_TESTS 1 00:38:15.904 #undef SPDK_CONFIG_TSAN 00:38:15.904 #undef SPDK_CONFIG_UBLK 00:38:15.904 #define SPDK_CONFIG_UBSAN 1 00:38:15.904 #define SPDK_CONFIG_UNIT_TESTS 1 00:38:15.904 #undef SPDK_CONFIG_URING 00:38:15.904 #define SPDK_CONFIG_URING_PATH 00:38:15.904 #undef SPDK_CONFIG_URING_ZNS 00:38:15.904 #undef SPDK_CONFIG_USDT 00:38:15.904 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:38:15.904 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:38:15.904 #undef SPDK_CONFIG_VFIO_USER 00:38:15.904 #define SPDK_CONFIG_VFIO_USER_DIR 00:38:15.904 #define SPDK_CONFIG_VHOST 1 00:38:15.904 #define SPDK_CONFIG_VIRTIO 1 00:38:15.904 #undef SPDK_CONFIG_VTUNE 00:38:15.904 #define SPDK_CONFIG_VTUNE_DIR 00:38:15.904 #define SPDK_CONFIG_WERROR 1 00:38:15.904 #define SPDK_CONFIG_WPDK_DIR 00:38:15.904 #undef SPDK_CONFIG_XNVME 00:38:15.904 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:38:15.904 02:08:15 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:38:15.904 02:08:15 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:15.904 02:08:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.904 02:08:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.904 02:08:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.904 02:08:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:15.904 02:08:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:15.904 02:08:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 
00:38:15.904 02:08:15 -- paths/export.sh@5 -- # export PATH 00:38:15.904 02:08:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:15.904 02:08:15 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:15.904 02:08:15 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:15.904 02:08:15 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:15.904 02:08:15 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:15.904 02:08:15 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:38:15.904 02:08:15 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:38:15.904 02:08:15 -- pm/common@67 -- # TEST_TAG=N/A 00:38:15.904 02:08:15 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:38:15.904 02:08:15 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:38:15.904 02:08:15 -- pm/common@71 -- # uname -s 00:38:15.904 02:08:15 -- pm/common@71 -- # PM_OS=Linux 00:38:15.904 02:08:15 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:38:15.904 02:08:15 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:38:15.904 02:08:15 -- pm/common@76 -- # [[ Linux == Linux ]] 00:38:15.904 02:08:15 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:38:15.904 02:08:15 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:38:15.904 02:08:15 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:38:15.904 02:08:15 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:38:15.904 02:08:15 -- common/autotest_common.sh@57 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:38:15.904 02:08:15 -- common/autotest_common.sh@61 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:38:15.904 02:08:15 -- common/autotest_common.sh@63 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:38:15.904 02:08:15 -- common/autotest_common.sh@65 -- # : 1 00:38:15.904 02:08:15 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:38:15.904 02:08:15 -- common/autotest_common.sh@67 -- # : 1 00:38:15.904 02:08:15 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:38:15.904 02:08:15 -- common/autotest_common.sh@69 -- # : 00:38:15.904 02:08:15 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:38:15.904 02:08:15 -- common/autotest_common.sh@71 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:38:15.904 02:08:15 -- common/autotest_common.sh@73 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:38:15.904 02:08:15 -- common/autotest_common.sh@75 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:38:15.904 02:08:15 -- common/autotest_common.sh@77 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:38:15.904 02:08:15 -- common/autotest_common.sh@79 -- # : 
1 00:38:15.904 02:08:15 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:38:15.904 02:08:15 -- common/autotest_common.sh@81 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:38:15.904 02:08:15 -- common/autotest_common.sh@83 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:38:15.904 02:08:15 -- common/autotest_common.sh@85 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:38:15.904 02:08:15 -- common/autotest_common.sh@87 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:38:15.904 02:08:15 -- common/autotest_common.sh@89 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:38:15.904 02:08:15 -- common/autotest_common.sh@91 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:38:15.904 02:08:15 -- common/autotest_common.sh@93 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:38:15.904 02:08:15 -- common/autotest_common.sh@95 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:38:15.904 02:08:15 -- common/autotest_common.sh@97 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:38:15.904 02:08:15 -- common/autotest_common.sh@99 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:38:15.904 02:08:15 -- common/autotest_common.sh@101 -- # : rdma 00:38:15.904 02:08:15 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:38:15.904 02:08:15 -- common/autotest_common.sh@103 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:38:15.904 02:08:15 -- common/autotest_common.sh@105 -- # : 0 00:38:15.904 02:08:15 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:38:15.904 02:08:15 -- common/autotest_common.sh@107 -- # : 1 00:38:15.905 02:08:15 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:38:15.905 02:08:15 -- common/autotest_common.sh@109 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:38:15.905 02:08:15 -- common/autotest_common.sh@111 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:38:15.905 02:08:15 -- common/autotest_common.sh@113 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:38:15.905 02:08:15 -- common/autotest_common.sh@115 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:38:15.905 02:08:15 -- common/autotest_common.sh@117 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:38:15.905 02:08:15 -- common/autotest_common.sh@119 -- # : 1 00:38:15.905 02:08:15 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:38:15.905 02:08:15 -- common/autotest_common.sh@121 -- # : 1 00:38:15.905 02:08:15 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:38:15.905 02:08:15 -- common/autotest_common.sh@123 -- # : 00:38:15.905 02:08:15 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:38:15.905 02:08:15 -- common/autotest_common.sh@125 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:38:15.905 02:08:15 -- 
common/autotest_common.sh@127 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:38:15.905 02:08:15 -- common/autotest_common.sh@129 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:38:15.905 02:08:15 -- common/autotest_common.sh@131 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:38:15.905 02:08:15 -- common/autotest_common.sh@133 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:38:15.905 02:08:15 -- common/autotest_common.sh@135 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:38:15.905 02:08:15 -- common/autotest_common.sh@137 -- # : 00:38:15.905 02:08:15 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:38:15.905 02:08:15 -- common/autotest_common.sh@139 -- # : true 00:38:15.905 02:08:15 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:38:15.905 02:08:15 -- common/autotest_common.sh@141 -- # : 1 00:38:15.905 02:08:15 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:38:15.905 02:08:15 -- common/autotest_common.sh@143 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:38:15.905 02:08:15 -- common/autotest_common.sh@145 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:38:15.905 02:08:15 -- common/autotest_common.sh@147 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:38:15.905 02:08:15 -- common/autotest_common.sh@149 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:38:15.905 02:08:15 -- common/autotest_common.sh@151 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:38:15.905 02:08:15 -- common/autotest_common.sh@153 -- # : 00:38:15.905 02:08:15 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:38:15.905 02:08:15 -- common/autotest_common.sh@155 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:38:15.905 02:08:15 -- common/autotest_common.sh@157 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:38:15.905 02:08:15 -- common/autotest_common.sh@159 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:38:15.905 02:08:15 -- common/autotest_common.sh@161 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:38:15.905 02:08:15 -- common/autotest_common.sh@163 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:38:15.905 02:08:15 -- common/autotest_common.sh@166 -- # : 00:38:15.905 02:08:15 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:38:15.905 02:08:15 -- common/autotest_common.sh@168 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:38:15.905 02:08:15 -- common/autotest_common.sh@170 -- # : 0 00:38:15.905 02:08:15 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:38:15.905 02:08:15 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@175 -- # export 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:15.905 02:08:15 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:38:15.905 02:08:15 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:38:15.905 02:08:15 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:15.905 02:08:15 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:15.905 02:08:15 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:38:15.905 02:08:15 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:38:15.905 02:08:15 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:15.905 02:08:15 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:15.905 02:08:15 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:15.905 02:08:15 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:15.905 02:08:15 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:38:15.905 02:08:15 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:38:15.905 02:08:15 -- common/autotest_common.sh@199 -- # cat 00:38:15.905 02:08:15 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:38:15.905 02:08:15 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:15.905 02:08:15 -- 
common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:15.905 02:08:15 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:15.905 02:08:15 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:15.905 02:08:15 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:38:15.905 02:08:15 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:38:15.905 02:08:15 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:15.905 02:08:15 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:15.905 02:08:15 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:15.905 02:08:15 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:15.905 02:08:15 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:38:15.905 02:08:15 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:38:15.905 02:08:15 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:15.905 02:08:15 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:15.905 02:08:15 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:15.905 02:08:15 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:15.905 02:08:15 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:15.905 02:08:15 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:15.905 02:08:15 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:38:15.906 02:08:15 -- common/autotest_common.sh@252 -- # export valgrind= 00:38:15.906 02:08:15 -- common/autotest_common.sh@252 -- # valgrind= 00:38:15.906 02:08:15 -- common/autotest_common.sh@258 -- # uname -s 00:38:15.906 02:08:15 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:38:15.906 02:08:15 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:38:15.906 02:08:15 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:38:15.906 02:08:15 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:38:15.906 02:08:15 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@268 -- # MAKE=make 00:38:15.906 02:08:15 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:38:15.906 02:08:15 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:38:15.906 02:08:15 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:38:15.906 02:08:15 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:38:15.906 02:08:15 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:38:15.906 02:08:15 -- common/autotest_common.sh@307 -- # [[ -z 142386 ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@307 -- # kill -0 142386 00:38:15.906 02:08:15 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:38:15.906 02:08:15 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:38:15.906 02:08:15 -- common/autotest_common.sh@320 -- # local mount target_dir 00:38:15.906 02:08:15 -- common/autotest_common.sh@322 -- # local -A mounts fss 
sizes avails uses 00:38:15.906 02:08:15 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:38:15.906 02:08:15 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:38:15.906 02:08:15 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:38:15.906 02:08:15 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.gC80s2 00:38:15.906 02:08:15 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:38:15.906 02:08:15 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.gC80s2/tests/interrupt /tmp/spdk.gC80s2 00:38:15.906 02:08:15 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@316 -- # df -T 00:38:15.906 02:08:15 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=1248964608 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253683200 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=4718592 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=10263945216 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=10336071680 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=6263685120 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6268395520 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=103061504 00:38:15.906 02:08:15 -- 
common/autotest_common.sh@351 -- # sizes["$mount"]=109395968 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253675008 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253679104 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:38:15.906 02:08:15 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # avails["$mount"]=94709903360 00:38:15.906 02:08:15 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:38:15.906 02:08:15 -- common/autotest_common.sh@352 -- # uses["$mount"]=4992876544 00:38:15.906 02:08:15 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:38:15.906 02:08:15 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:38:15.906 * Looking for test storage... 00:38:15.906 02:08:15 -- common/autotest_common.sh@357 -- # local target_space new_size 00:38:15.906 02:08:15 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:38:15.906 02:08:15 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.906 02:08:15 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:38:15.906 02:08:15 -- common/autotest_common.sh@361 -- # mount=/ 00:38:15.906 02:08:15 -- common/autotest_common.sh@363 -- # target_space=10263945216 00:38:15.906 02:08:15 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:38:15.906 02:08:15 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:38:15.906 02:08:15 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@370 -- # new_size=12550664192 00:38:15.906 02:08:15 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:38:15.906 02:08:15 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.906 02:08:15 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.906 02:08:15 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:15.906 02:08:15 -- common/autotest_common.sh@378 -- # return 0 00:38:15.906 02:08:15 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:38:15.906 02:08:15 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:38:15.906 02:08:15 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:38:15.906 02:08:15 -- common/autotest_common.sh@1672 
-- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:38:15.906 02:08:15 -- common/autotest_common.sh@1673 -- # true 00:38:15.906 02:08:15 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:38:15.906 02:08:15 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:38:15.906 02:08:15 -- common/autotest_common.sh@27 -- # exec 00:38:15.906 02:08:15 -- common/autotest_common.sh@29 -- # exec 00:38:15.906 02:08:15 -- common/autotest_common.sh@31 -- # xtrace_restore 00:38:15.906 02:08:15 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:38:15.906 02:08:15 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:38:15.906 02:08:15 -- common/autotest_common.sh@18 -- # set -x 00:38:15.906 02:08:15 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:15.906 02:08:15 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:38:15.906 02:08:15 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:38:15.906 02:08:15 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:38:15.906 02:08:15 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:38:15.906 02:08:15 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:38:15.906 02:08:15 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:15.907 02:08:15 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:15.907 02:08:15 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:38:15.907 02:08:15 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.907 02:08:15 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:38:15.907 02:08:15 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142431 00:38:15.907 02:08:15 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:15.907 02:08:15 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142431 /var/tmp/spdk.sock 00:38:15.907 02:08:15 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:15.907 02:08:15 -- common/autotest_common.sh@817 -- # '[' -z 142431 ']' 00:38:15.907 02:08:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.907 02:08:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:15.907 02:08:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.907 02:08:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:15.907 02:08:15 -- common/autotest_common.sh@10 -- # set +x 00:38:16.165 [2024-04-24 02:08:16.043977] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:16.165 [2024-04-24 02:08:16.044423] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142431 ] 00:38:16.165 [2024-04-24 02:08:16.239811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:16.730 [2024-04-24 02:08:16.541424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.730 [2024-04-24 02:08:16.541577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.730 [2024-04-24 02:08:16.541580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.987 [2024-04-24 02:08:16.891625] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:16.987 02:08:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:16.987 02:08:17 -- common/autotest_common.sh@850 -- # return 0 00:38:16.987 02:08:17 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:38:16.987 02:08:17 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:38:16.987 02:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:38:16.987 02:08:17 -- common/autotest_common.sh@10 -- # set +x 00:38:16.987 02:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:38:16.987 02:08:17 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:38:16.987 "name": "app_thread", 00:38:16.987 "id": 1, 00:38:16.987 "active_pollers": [], 00:38:16.987 "timed_pollers": [ 00:38:16.987 { 00:38:16.987 "name": "rpc_subsystem_poll_servers", 00:38:16.987 "id": 1, 00:38:16.987 "state": "waiting", 00:38:16.987 "run_count": 0, 00:38:16.987 "busy_count": 0, 00:38:16.987 "period_ticks": 8400000 00:38:16.987 } 00:38:16.987 ], 00:38:16.987 "paused_pollers": [] 00:38:16.987 }' 00:38:16.987 02:08:17 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:38:17.244 02:08:17 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:38:17.244 02:08:17 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:38:17.244 02:08:17 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:38:17.244 02:08:17 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:38:17.244 02:08:17 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:38:17.244 02:08:17 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:38:17.244 02:08:17 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:17.244 02:08:17 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:17.244 5000+0 records in 00:38:17.244 5000+0 records out 00:38:17.244 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0290588 s, 352 MB/s 00:38:17.244 02:08:17 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:17.545 AIO0 00:38:17.545 02:08:17 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@37 -- # 
rpc_cmd thread_get_pollers 00:38:17.818 02:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:38:17.818 02:08:17 -- common/autotest_common.sh@10 -- # set +x 00:38:17.818 02:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:38:17.818 "name": "app_thread", 00:38:17.818 "id": 1, 00:38:17.818 "active_pollers": [], 00:38:17.818 "timed_pollers": [ 00:38:17.818 { 00:38:17.818 "name": "rpc_subsystem_poll_servers", 00:38:17.818 "id": 1, 00:38:17.818 "state": "waiting", 00:38:17.818 "run_count": 0, 00:38:17.818 "busy_count": 0, 00:38:17.818 "period_ticks": 8400000 00:38:17.818 } 00:38:17.818 ], 00:38:17.818 "paused_pollers": [] 00:38:17.818 }' 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:38:17.818 02:08:17 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:38:18.076 02:08:17 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:38:18.076 02:08:17 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:38:18.076 02:08:17 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:38:18.076 02:08:17 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 142431 00:38:18.076 02:08:17 -- common/autotest_common.sh@936 -- # '[' -z 142431 ']' 00:38:18.076 02:08:17 -- common/autotest_common.sh@940 -- # kill -0 142431 00:38:18.076 02:08:17 -- common/autotest_common.sh@941 -- # uname 00:38:18.076 02:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:18.076 02:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142431 00:38:18.076 02:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:18.076 02:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:18.076 02:08:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142431' 00:38:18.076 killing process with pid 142431 00:38:18.076 02:08:17 -- common/autotest_common.sh@955 -- # kill 142431 00:38:18.076 02:08:17 -- common/autotest_common.sh@960 -- # wait 142431 00:38:19.009 [2024-04-24 02:08:18.900871] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
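For readers following the trace above: the poller check reduces to querying the running interrupt target over its RPC socket and filtering the JSON with jq. A minimal sketch, assuming an interrupt_tgt is already listening on the default socket /var/tmp/spdk.sock and that rpc.py and jq are on PATH (call names and jq filters are taken from the trace above; this is not the test script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Grab the app_thread poller state as JSON (the same thread_get_pollers call traced above).
    app_thread=$("$rpc" thread_get_pollers | jq -r '.threads[0]')
    # List active and timed pollers by name.
    jq -r '.active_pollers[].name' <<< "$app_thread"
    jq -r '.timed_pollers[].name'  <<< "$app_thread"
    # Once the AIO bdev's pollers have been unregistered, only
    # rpc_subsystem_poll_servers is expected to remain among the timed pollers.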
00:38:19.574 02:08:19 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:38:19.574 02:08:19 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:19.574 ************************************ 00:38:19.574 END TEST reap_unregistered_poller 00:38:19.574 ************************************ 00:38:19.574 00:38:19.574 real 0m3.695s 00:38:19.574 user 0m3.247s 00:38:19.574 sys 0m0.625s 00:38:19.574 02:08:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:19.574 02:08:19 -- common/autotest_common.sh@10 -- # set +x 00:38:19.574 02:08:19 -- spdk/autotest.sh@194 -- # uname -s 00:38:19.574 02:08:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:38:19.574 02:08:19 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:38:19.574 02:08:19 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:38:19.574 02:08:19 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:19.574 02:08:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:38:19.574 02:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:19.574 02:08:19 -- common/autotest_common.sh@10 -- # set +x 00:38:19.574 ************************************ 00:38:19.574 START TEST spdk_dd 00:38:19.574 ************************************ 00:38:19.574 02:08:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:19.574 * Looking for test storage... 00:38:19.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:19.575 02:08:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:19.575 02:08:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.575 02:08:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.575 02:08:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.575 02:08:19 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.575 02:08:19 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.575 02:08:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.575 02:08:19 -- paths/export.sh@5 -- # export PATH 00:38:19.575 02:08:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.575 02:08:19 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:20.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:38:20.141 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:21.077 02:08:20 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:38:21.077 02:08:20 -- dd/dd.sh@11 -- # nvme_in_userspace 00:38:21.077 02:08:20 -- scripts/common.sh@309 -- # local bdf bdfs 00:38:21.077 02:08:20 -- scripts/common.sh@310 -- # local nvmes 00:38:21.077 02:08:20 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:38:21.077 02:08:20 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:38:21.077 02:08:20 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:38:21.077 02:08:20 -- scripts/common.sh@295 -- # local bdf= 00:38:21.077 02:08:20 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:38:21.077 02:08:20 -- scripts/common.sh@230 -- # local class 00:38:21.077 02:08:20 -- scripts/common.sh@231 -- # local subclass 00:38:21.077 02:08:20 -- scripts/common.sh@232 -- # local progif 00:38:21.077 02:08:20 -- scripts/common.sh@233 -- # printf %02x 1 00:38:21.077 02:08:20 -- scripts/common.sh@233 -- # class=01 00:38:21.077 02:08:20 -- scripts/common.sh@234 -- # printf %02x 8 00:38:21.077 02:08:20 -- scripts/common.sh@234 -- # subclass=08 00:38:21.077 02:08:20 -- scripts/common.sh@235 -- # printf %02x 2 00:38:21.077 02:08:20 -- scripts/common.sh@235 -- # progif=02 00:38:21.077 02:08:20 -- scripts/common.sh@237 -- # hash lspci 00:38:21.077 02:08:20 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:38:21.077 02:08:20 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:38:21.077 02:08:20 -- scripts/common.sh@240 -- # grep -i -- -p02 00:38:21.077 02:08:20 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:38:21.077 02:08:20 -- scripts/common.sh@242 -- # tr -d '"' 00:38:21.077 02:08:20 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:21.077 02:08:20 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:38:21.077 02:08:20 -- scripts/common.sh@15 -- # local i 00:38:21.077 02:08:20 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:38:21.077 02:08:20 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:21.077 02:08:20 -- scripts/common.sh@24 -- # return 0 00:38:21.077 02:08:20 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:38:21.077 02:08:20 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:21.077 02:08:20 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:38:21.077 02:08:21 -- scripts/common.sh@320 -- # uname -s 00:38:21.077 02:08:21 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:21.077 02:08:21 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:21.077 02:08:21 -- scripts/common.sh@325 -- # (( 1 )) 00:38:21.077 02:08:21 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:38:21.077 02:08:21 -- dd/dd.sh@13 -- # check_liburing 00:38:21.077 02:08:21 -- dd/common.sh@139 -- # local lib so 00:38:21.077 02:08:21 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:38:21.077 
02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:38:21.077 02:08:21 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:38:21.077 02:08:21 -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:21.077 02:08:21 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:38:21.077 02:08:21 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:38:21.077 02:08:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:38:21.077 
02:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:21.077 02:08:21 -- common/autotest_common.sh@10 -- # set +x 00:38:21.077 ************************************ 00:38:21.077 START TEST spdk_dd_basic_rw 00:38:21.077 ************************************ 00:38:21.077 02:08:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:38:21.336 * Looking for test storage... 00:38:21.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:21.336 02:08:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:21.336 02:08:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:21.336 02:08:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.336 02:08:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.336 02:08:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:21.336 02:08:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:21.336 02:08:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:21.336 02:08:21 -- paths/export.sh@5 -- # export PATH 00:38:21.336 02:08:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:21.336 02:08:21 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:38:21.336 02:08:21 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:38:21.336 02:08:21 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:38:21.336 02:08:21 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:38:21.336 02:08:21 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:38:21.336 02:08:21 -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:38:21.336 02:08:21 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:38:21.336 02:08:21 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:21.336 02:08:21 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:21.336 02:08:21 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:38:21.336 02:08:21 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:38:21.336 02:08:21 -- dd/common.sh@126 -- # mapfile -t id 00:38:21.336 02:08:21 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:38:21.597 02:08:21 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: 
Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK 
Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 99 Data Units Written: 7 Host Read Commands: 2195 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:38:21.597 02:08:21 -- dd/common.sh@130 -- # lbaf=04 00:38:21.598 02:08:21 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not 
Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): 
Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 99 Data Units Written: 7 Host Read Commands: 2195 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:38:21.598 02:08:21 -- dd/common.sh@132 -- # lbaf=4096 00:38:21.598 02:08:21 -- dd/common.sh@134 -- # echo 4096 00:38:21.598 02:08:21 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:38:21.598 02:08:21 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:21.598 02:08:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:38:21.598 02:08:21 -- 
dd/basic_rw.sh@96 -- # gen_conf 00:38:21.598 02:08:21 -- dd/basic_rw.sh@96 -- # : 00:38:21.598 02:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:21.598 02:08:21 -- dd/common.sh@31 -- # xtrace_disable 00:38:21.598 02:08:21 -- common/autotest_common.sh@10 -- # set +x 00:38:21.598 02:08:21 -- common/autotest_common.sh@10 -- # set +x 00:38:21.598 ************************************ 00:38:21.598 START TEST dd_bs_lt_native_bs 00:38:21.598 ************************************ 00:38:21.598 02:08:21 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:21.598 { 00:38:21.598 "subsystems": [ 00:38:21.598 { 00:38:21.598 "subsystem": "bdev", 00:38:21.598 "config": [ 00:38:21.598 { 00:38:21.598 "params": { 00:38:21.598 "trtype": "pcie", 00:38:21.598 "traddr": "0000:00:10.0", 00:38:21.598 "name": "Nvme0" 00:38:21.598 }, 00:38:21.598 "method": "bdev_nvme_attach_controller" 00:38:21.598 }, 00:38:21.598 { 00:38:21.598 "method": "bdev_wait_for_examine" 00:38:21.598 } 00:38:21.598 ] 00:38:21.598 } 00:38:21.598 ] 00:38:21.598 } 00:38:21.598 02:08:21 -- common/autotest_common.sh@638 -- # local es=0 00:38:21.598 02:08:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:21.598 02:08:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:21.598 02:08:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:38:21.598 02:08:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:21.598 02:08:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:38:21.598 02:08:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:21.598 02:08:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:38:21.598 02:08:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:21.598 02:08:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:21.598 02:08:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:21.598 [2024-04-24 02:08:21.654075] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:21.598 [2024-04-24 02:08:21.654471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142769 ] 00:38:21.857 [2024-04-24 02:08:21.833468] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.116 [2024-04-24 02:08:22.141562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.684 [2024-04-24 02:08:22.629324] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:38:22.684 [2024-04-24 02:08:22.629566] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:23.618 [2024-04-24 02:08:23.600293] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:24.184 ************************************ 00:38:24.184 END TEST dd_bs_lt_native_bs 00:38:24.184 ************************************ 00:38:24.184 02:08:24 -- common/autotest_common.sh@641 -- # es=234 00:38:24.184 02:08:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:38:24.184 02:08:24 -- common/autotest_common.sh@650 -- # es=106 00:38:24.184 02:08:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:38:24.184 02:08:24 -- common/autotest_common.sh@658 -- # es=1 00:38:24.184 02:08:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:38:24.184 00:38:24.184 real 0m2.529s 00:38:24.184 user 0m2.185s 00:38:24.184 sys 0m0.305s 00:38:24.184 02:08:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:24.184 02:08:24 -- common/autotest_common.sh@10 -- # set +x 00:38:24.184 02:08:24 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:38:24.184 02:08:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:38:24.184 02:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:24.184 02:08:24 -- common/autotest_common.sh@10 -- # set +x 00:38:24.184 ************************************ 00:38:24.184 START TEST dd_rw 00:38:24.184 ************************************ 00:38:24.184 02:08:24 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:38:24.184 02:08:24 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:38:24.184 02:08:24 -- dd/basic_rw.sh@12 -- # local count size 00:38:24.184 02:08:24 -- dd/basic_rw.sh@13 -- # local qds bss 00:38:24.184 02:08:24 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:38:24.184 02:08:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:24.184 02:08:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:24.184 02:08:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:24.184 02:08:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:24.184 02:08:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:24.184 02:08:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:24.184 02:08:24 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:24.184 02:08:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:24.184 02:08:24 -- dd/basic_rw.sh@23 -- # count=15 00:38:24.184 02:08:24 -- dd/basic_rw.sh@24 -- # count=15 00:38:24.184 02:08:24 -- dd/basic_rw.sh@25 -- # size=61440 00:38:24.184 02:08:24 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:24.184 02:08:24 -- dd/common.sh@98 -- # xtrace_disable 00:38:24.184 02:08:24 -- common/autotest_common.sh@10 -- # set +x 00:38:24.751 02:08:24 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:38:24.751 02:08:24 -- dd/basic_rw.sh@30 -- # gen_conf 00:38:24.751 02:08:24 -- dd/common.sh@31 -- # xtrace_disable 00:38:24.751 02:08:24 -- common/autotest_common.sh@10 -- # set +x 00:38:24.751 { 00:38:24.751 "subsystems": [ 00:38:24.751 { 00:38:24.751 "subsystem": "bdev", 00:38:24.751 "config": [ 00:38:24.751 { 00:38:24.751 "params": { 00:38:24.751 "trtype": "pcie", 00:38:24.751 "traddr": "0000:00:10.0", 00:38:24.751 "name": "Nvme0" 00:38:24.751 }, 00:38:24.751 "method": "bdev_nvme_attach_controller" 00:38:24.751 }, 00:38:24.751 { 00:38:24.751 "method": "bdev_wait_for_examine" 00:38:24.751 } 00:38:24.751 ] 00:38:24.751 } 00:38:24.751 ] 00:38:24.751 } 00:38:24.751 [2024-04-24 02:08:24.766532] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:24.751 [2024-04-24 02:08:24.766840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142840 ] 00:38:25.058 [2024-04-24 02:08:24.924344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.338 [2024-04-24 02:08:25.155191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.971  Copying: 60/60 [kB] (average 19 MBps) 00:38:26.971 00:38:26.971 02:08:27 -- dd/basic_rw.sh@37 -- # gen_conf 00:38:26.971 02:08:27 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:38:26.971 02:08:27 -- dd/common.sh@31 -- # xtrace_disable 00:38:26.971 02:08:27 -- common/autotest_common.sh@10 -- # set +x 00:38:27.228 { 00:38:27.228 "subsystems": [ 00:38:27.228 { 00:38:27.228 "subsystem": "bdev", 00:38:27.228 "config": [ 00:38:27.228 { 00:38:27.228 "params": { 00:38:27.228 "trtype": "pcie", 00:38:27.228 "traddr": "0000:00:10.0", 00:38:27.228 "name": "Nvme0" 00:38:27.228 }, 00:38:27.228 "method": "bdev_nvme_attach_controller" 00:38:27.228 }, 00:38:27.228 { 00:38:27.228 "method": "bdev_wait_for_examine" 00:38:27.228 } 00:38:27.228 ] 00:38:27.228 } 00:38:27.228 ] 00:38:27.228 } 00:38:27.228 [2024-04-24 02:08:27.126759] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:27.228 [2024-04-24 02:08:27.127169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142874 ] 00:38:27.228 [2024-04-24 02:08:27.308556] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.795 [2024-04-24 02:08:27.601337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.955  Copying: 60/60 [kB] (average 29 MBps) 00:38:29.955 00:38:29.955 02:08:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:29.955 02:08:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:29.955 02:08:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:29.955 02:08:29 -- dd/common.sh@11 -- # local nvme_ref= 00:38:29.955 02:08:29 -- dd/common.sh@12 -- # local size=61440 00:38:29.955 02:08:29 -- dd/common.sh@14 -- # local bs=1048576 00:38:29.955 02:08:29 -- dd/common.sh@15 -- # local count=1 00:38:29.955 02:08:29 -- dd/common.sh@18 -- # gen_conf 00:38:29.955 02:08:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:29.955 02:08:29 -- dd/common.sh@31 -- # xtrace_disable 00:38:29.955 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:38:29.955 { 00:38:29.955 "subsystems": [ 00:38:29.955 { 00:38:29.955 "subsystem": "bdev", 00:38:29.955 "config": [ 00:38:29.955 { 00:38:29.955 "params": { 00:38:29.955 "trtype": "pcie", 00:38:29.955 "traddr": "0000:00:10.0", 00:38:29.955 "name": "Nvme0" 00:38:29.955 }, 00:38:29.955 "method": "bdev_nvme_attach_controller" 00:38:29.955 }, 00:38:29.955 { 00:38:29.955 "method": "bdev_wait_for_examine" 00:38:29.955 } 00:38:29.955 ] 00:38:29.955 } 00:38:29.955 ] 00:38:29.955 } 00:38:29.955 [2024-04-24 02:08:29.701390] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:29.955 [2024-04-24 02:08:29.701815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142910 ] 00:38:29.955 [2024-04-24 02:08:29.881057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.213 [2024-04-24 02:08:30.123797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.171  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:32.171 00:38:32.171 02:08:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:32.171 02:08:31 -- dd/basic_rw.sh@23 -- # count=15 00:38:32.171 02:08:31 -- dd/basic_rw.sh@24 -- # count=15 00:38:32.171 02:08:31 -- dd/basic_rw.sh@25 -- # size=61440 00:38:32.171 02:08:31 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:32.171 02:08:31 -- dd/common.sh@98 -- # xtrace_disable 00:38:32.171 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:38:32.737 02:08:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:38:32.737 02:08:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:38:32.737 02:08:32 -- dd/common.sh@31 -- # xtrace_disable 00:38:32.737 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:38:32.737 { 00:38:32.737 "subsystems": [ 00:38:32.737 { 00:38:32.737 "subsystem": "bdev", 00:38:32.737 "config": [ 00:38:32.737 { 00:38:32.737 "params": { 00:38:32.737 "trtype": "pcie", 00:38:32.737 "traddr": "0000:00:10.0", 00:38:32.737 "name": "Nvme0" 00:38:32.737 }, 00:38:32.737 "method": "bdev_nvme_attach_controller" 00:38:32.737 }, 00:38:32.737 { 00:38:32.737 "method": "bdev_wait_for_examine" 00:38:32.737 } 00:38:32.737 ] 00:38:32.737 } 00:38:32.737 ] 00:38:32.737 } 00:38:32.737 [2024-04-24 02:08:32.671860] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:32.737 [2024-04-24 02:08:32.672468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142954 ] 00:38:32.997 [2024-04-24 02:08:32.851139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.256 [2024-04-24 02:08:33.147129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.228  Copying: 60/60 [kB] (average 58 MBps) 00:38:35.228 00:38:35.228 02:08:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:38:35.228 02:08:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:38:35.228 02:08:35 -- dd/common.sh@31 -- # xtrace_disable 00:38:35.228 02:08:35 -- common/autotest_common.sh@10 -- # set +x 00:38:35.228 { 00:38:35.228 "subsystems": [ 00:38:35.228 { 00:38:35.228 "subsystem": "bdev", 00:38:35.228 "config": [ 00:38:35.228 { 00:38:35.228 "params": { 00:38:35.228 "trtype": "pcie", 00:38:35.228 "traddr": "0000:00:10.0", 00:38:35.228 "name": "Nvme0" 00:38:35.228 }, 00:38:35.228 "method": "bdev_nvme_attach_controller" 00:38:35.228 }, 00:38:35.228 { 00:38:35.228 "method": "bdev_wait_for_examine" 00:38:35.228 } 00:38:35.228 ] 00:38:35.228 } 00:38:35.228 ] 00:38:35.228 } 00:38:35.228 [2024-04-24 02:08:35.206819] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:35.228 [2024-04-24 02:08:35.207777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142985 ] 00:38:35.486 [2024-04-24 02:08:35.387772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.745 [2024-04-24 02:08:35.627074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.376  Copying: 60/60 [kB] (average 58 MBps) 00:38:37.376 00:38:37.634 02:08:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:37.634 02:08:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:37.634 02:08:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:37.634 02:08:37 -- dd/common.sh@11 -- # local nvme_ref= 00:38:37.634 02:08:37 -- dd/common.sh@12 -- # local size=61440 00:38:37.634 02:08:37 -- dd/common.sh@14 -- # local bs=1048576 00:38:37.634 02:08:37 -- dd/common.sh@15 -- # local count=1 00:38:37.634 02:08:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:37.634 02:08:37 -- dd/common.sh@18 -- # gen_conf 00:38:37.634 02:08:37 -- dd/common.sh@31 -- # xtrace_disable 00:38:37.634 02:08:37 -- common/autotest_common.sh@10 -- # set +x 00:38:37.634 { 00:38:37.634 "subsystems": [ 00:38:37.634 { 00:38:37.634 "subsystem": "bdev", 00:38:37.634 "config": [ 00:38:37.634 { 00:38:37.634 "params": { 00:38:37.634 "trtype": "pcie", 00:38:37.634 "traddr": "0000:00:10.0", 00:38:37.634 "name": "Nvme0" 00:38:37.634 }, 00:38:37.634 "method": "bdev_nvme_attach_controller" 00:38:37.634 }, 00:38:37.634 { 00:38:37.634 "method": "bdev_wait_for_examine" 00:38:37.634 } 00:38:37.634 ] 00:38:37.634 } 00:38:37.634 ] 00:38:37.634 } 00:38:37.634 [2024-04-24 02:08:37.557050] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:37.634 [2024-04-24 02:08:37.557957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143017 ] 00:38:37.895 [2024-04-24 02:08:37.760671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.158 [2024-04-24 02:08:38.058234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.163  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:40.163 00:38:40.163 02:08:40 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:40.163 02:08:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:40.163 02:08:40 -- dd/basic_rw.sh@23 -- # count=7 00:38:40.163 02:08:40 -- dd/basic_rw.sh@24 -- # count=7 00:38:40.163 02:08:40 -- dd/basic_rw.sh@25 -- # size=57344 00:38:40.163 02:08:40 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:40.163 02:08:40 -- dd/common.sh@98 -- # xtrace_disable 00:38:40.164 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:38:40.731 02:08:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:38:40.731 02:08:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:38:40.731 02:08:40 -- dd/common.sh@31 -- # xtrace_disable 00:38:40.731 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:38:40.731 { 00:38:40.731 "subsystems": [ 00:38:40.731 { 00:38:40.731 "subsystem": "bdev", 00:38:40.731 "config": [ 00:38:40.731 { 00:38:40.731 "params": { 00:38:40.731 "trtype": "pcie", 00:38:40.731 "traddr": "0000:00:10.0", 00:38:40.731 "name": "Nvme0" 00:38:40.731 }, 00:38:40.731 "method": "bdev_nvme_attach_controller" 00:38:40.731 }, 00:38:40.731 { 00:38:40.731 "method": "bdev_wait_for_examine" 00:38:40.731 } 00:38:40.731 ] 00:38:40.731 } 00:38:40.731 ] 00:38:40.731 } 00:38:40.731 [2024-04-24 02:08:40.770555] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:40.731 [2024-04-24 02:08:40.770855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143068 ] 00:38:40.989 [2024-04-24 02:08:40.929782] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.247 [2024-04-24 02:08:41.156212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.252  Copying: 56/56 [kB] (average 54 MBps) 00:38:43.252 00:38:43.252 02:08:42 -- dd/basic_rw.sh@37 -- # gen_conf 00:38:43.252 02:08:42 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:38:43.252 02:08:42 -- dd/common.sh@31 -- # xtrace_disable 00:38:43.252 02:08:42 -- common/autotest_common.sh@10 -- # set +x 00:38:43.252 { 00:38:43.252 "subsystems": [ 00:38:43.252 { 00:38:43.252 "subsystem": "bdev", 00:38:43.252 "config": [ 00:38:43.252 { 00:38:43.252 "params": { 00:38:43.252 "trtype": "pcie", 00:38:43.252 "traddr": "0000:00:10.0", 00:38:43.252 "name": "Nvme0" 00:38:43.252 }, 00:38:43.252 "method": "bdev_nvme_attach_controller" 00:38:43.252 }, 00:38:43.252 { 00:38:43.252 "method": "bdev_wait_for_examine" 00:38:43.252 } 00:38:43.252 ] 00:38:43.252 } 00:38:43.252 ] 00:38:43.252 } 00:38:43.252 [2024-04-24 02:08:43.050485] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:43.252 [2024-04-24 02:08:43.050891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143100 ] 00:38:43.252 [2024-04-24 02:08:43.228771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.510 [2024-04-24 02:08:43.455522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.453  Copying: 56/56 [kB] (average 54 MBps) 00:38:45.453 00:38:45.453 02:08:45 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:45.453 02:08:45 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:45.453 02:08:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:45.453 02:08:45 -- dd/common.sh@11 -- # local nvme_ref= 00:38:45.453 02:08:45 -- dd/common.sh@12 -- # local size=57344 00:38:45.453 02:08:45 -- dd/common.sh@14 -- # local bs=1048576 00:38:45.453 02:08:45 -- dd/common.sh@15 -- # local count=1 00:38:45.453 02:08:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:45.453 02:08:45 -- dd/common.sh@18 -- # gen_conf 00:38:45.453 02:08:45 -- dd/common.sh@31 -- # xtrace_disable 00:38:45.453 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:38:45.711 { 00:38:45.711 "subsystems": [ 00:38:45.711 { 00:38:45.711 "subsystem": "bdev", 00:38:45.711 "config": [ 00:38:45.711 { 00:38:45.711 "params": { 00:38:45.711 "trtype": "pcie", 00:38:45.711 "traddr": "0000:00:10.0", 00:38:45.711 "name": "Nvme0" 00:38:45.711 }, 00:38:45.711 "method": "bdev_nvme_attach_controller" 00:38:45.711 }, 00:38:45.711 { 00:38:45.711 "method": "bdev_wait_for_examine" 00:38:45.711 } 00:38:45.711 ] 00:38:45.711 } 00:38:45.711 ] 00:38:45.711 } 00:38:45.711 [2024-04-24 02:08:45.554661] Starting SPDK v24.05-pre git sha1 
3f3de12cc / DPDK 23.11.0 initialization... 00:38:45.711 [2024-04-24 02:08:45.554945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143132 ] 00:38:45.711 [2024-04-24 02:08:45.716691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.041 [2024-04-24 02:08:45.947283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.982  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:47.982 00:38:47.982 02:08:47 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:47.982 02:08:47 -- dd/basic_rw.sh@23 -- # count=7 00:38:47.982 02:08:47 -- dd/basic_rw.sh@24 -- # count=7 00:38:47.982 02:08:47 -- dd/basic_rw.sh@25 -- # size=57344 00:38:47.982 02:08:47 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:47.982 02:08:47 -- dd/common.sh@98 -- # xtrace_disable 00:38:47.982 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:38:48.548 02:08:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:38:48.548 02:08:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:38:48.548 02:08:48 -- dd/common.sh@31 -- # xtrace_disable 00:38:48.548 02:08:48 -- common/autotest_common.sh@10 -- # set +x 00:38:48.548 { 00:38:48.548 "subsystems": [ 00:38:48.548 { 00:38:48.548 "subsystem": "bdev", 00:38:48.548 "config": [ 00:38:48.548 { 00:38:48.548 "params": { 00:38:48.548 "trtype": "pcie", 00:38:48.548 "traddr": "0000:00:10.0", 00:38:48.548 "name": "Nvme0" 00:38:48.548 }, 00:38:48.548 "method": "bdev_nvme_attach_controller" 00:38:48.548 }, 00:38:48.548 { 00:38:48.548 "method": "bdev_wait_for_examine" 00:38:48.548 } 00:38:48.548 ] 00:38:48.548 } 00:38:48.548 ] 00:38:48.548 } 00:38:48.548 [2024-04-24 02:08:48.418102] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:48.548 [2024-04-24 02:08:48.418620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143175 ] 00:38:48.548 [2024-04-24 02:08:48.597913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.807 [2024-04-24 02:08:48.853240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.276  Copying: 56/56 [kB] (average 54 MBps) 00:38:51.276 00:38:51.276 02:08:50 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:38:51.276 02:08:50 -- dd/basic_rw.sh@37 -- # gen_conf 00:38:51.276 02:08:50 -- dd/common.sh@31 -- # xtrace_disable 00:38:51.276 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:38:51.276 { 00:38:51.276 "subsystems": [ 00:38:51.276 { 00:38:51.276 "subsystem": "bdev", 00:38:51.276 "config": [ 00:38:51.276 { 00:38:51.276 "params": { 00:38:51.276 "trtype": "pcie", 00:38:51.276 "traddr": "0000:00:10.0", 00:38:51.276 "name": "Nvme0" 00:38:51.276 }, 00:38:51.276 "method": "bdev_nvme_attach_controller" 00:38:51.276 }, 00:38:51.276 { 00:38:51.276 "method": "bdev_wait_for_examine" 00:38:51.276 } 00:38:51.276 ] 00:38:51.276 } 00:38:51.276 ] 00:38:51.276 } 00:38:51.276 [2024-04-24 02:08:50.982378] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:51.276 [2024-04-24 02:08:50.982701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143213 ] 00:38:51.276 [2024-04-24 02:08:51.144606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.533 [2024-04-24 02:08:51.363441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.166  Copying: 56/56 [kB] (average 54 MBps) 00:38:53.166 00:38:53.166 02:08:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:53.166 02:08:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:53.166 02:08:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:53.166 02:08:53 -- dd/common.sh@11 -- # local nvme_ref= 00:38:53.166 02:08:53 -- dd/common.sh@12 -- # local size=57344 00:38:53.166 02:08:53 -- dd/common.sh@14 -- # local bs=1048576 00:38:53.166 02:08:53 -- dd/common.sh@15 -- # local count=1 00:38:53.166 02:08:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:53.166 02:08:53 -- dd/common.sh@18 -- # gen_conf 00:38:53.166 02:08:53 -- dd/common.sh@31 -- # xtrace_disable 00:38:53.166 02:08:53 -- common/autotest_common.sh@10 -- # set +x 00:38:53.166 { 00:38:53.166 "subsystems": [ 00:38:53.166 { 00:38:53.166 "subsystem": "bdev", 00:38:53.166 "config": [ 00:38:53.166 { 00:38:53.166 "params": { 00:38:53.166 "trtype": "pcie", 00:38:53.166 "traddr": "0000:00:10.0", 00:38:53.166 "name": "Nvme0" 00:38:53.166 }, 00:38:53.166 "method": "bdev_nvme_attach_controller" 00:38:53.166 }, 00:38:53.166 { 00:38:53.166 "method": "bdev_wait_for_examine" 00:38:53.166 } 00:38:53.167 ] 00:38:53.167 } 00:38:53.167 ] 00:38:53.167 } 00:38:53.167 [2024-04-24 02:08:53.214145] Starting SPDK v24.05-pre git sha1 
3f3de12cc / DPDK 23.11.0 initialization... 00:38:53.167 [2024-04-24 02:08:53.214556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143246 ] 00:38:53.428 [2024-04-24 02:08:53.388747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.687 [2024-04-24 02:08:53.611507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.744  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:55.744 00:38:55.744 02:08:55 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:55.744 02:08:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:55.744 02:08:55 -- dd/basic_rw.sh@23 -- # count=3 00:38:55.744 02:08:55 -- dd/basic_rw.sh@24 -- # count=3 00:38:55.744 02:08:55 -- dd/basic_rw.sh@25 -- # size=49152 00:38:55.744 02:08:55 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:55.744 02:08:55 -- dd/common.sh@98 -- # xtrace_disable 00:38:55.744 02:08:55 -- common/autotest_common.sh@10 -- # set +x 00:38:56.002 02:08:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:38:56.002 02:08:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:38:56.002 02:08:55 -- dd/common.sh@31 -- # xtrace_disable 00:38:56.002 02:08:55 -- common/autotest_common.sh@10 -- # set +x 00:38:56.002 { 00:38:56.002 "subsystems": [ 00:38:56.002 { 00:38:56.002 "subsystem": "bdev", 00:38:56.002 "config": [ 00:38:56.002 { 00:38:56.002 "params": { 00:38:56.002 "trtype": "pcie", 00:38:56.002 "traddr": "0000:00:10.0", 00:38:56.002 "name": "Nvme0" 00:38:56.002 }, 00:38:56.002 "method": "bdev_nvme_attach_controller" 00:38:56.002 }, 00:38:56.002 { 00:38:56.002 "method": "bdev_wait_for_examine" 00:38:56.002 } 00:38:56.002 ] 00:38:56.002 } 00:38:56.002 ] 00:38:56.002 } 00:38:56.002 [2024-04-24 02:08:56.074287] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:38:56.002 [2024-04-24 02:08:56.075106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143279 ] 00:38:56.260 [2024-04-24 02:08:56.253691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.518 [2024-04-24 02:08:56.481429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.462  Copying: 48/48 [kB] (average 46 MBps) 00:38:58.462 00:38:58.462 02:08:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:38:58.462 02:08:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:38:58.462 02:08:58 -- dd/common.sh@31 -- # xtrace_disable 00:38:58.462 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:38:58.462 { 00:38:58.462 "subsystems": [ 00:38:58.462 { 00:38:58.462 "subsystem": "bdev", 00:38:58.462 "config": [ 00:38:58.462 { 00:38:58.462 "params": { 00:38:58.462 "trtype": "pcie", 00:38:58.462 "traddr": "0000:00:10.0", 00:38:58.462 "name": "Nvme0" 00:38:58.462 }, 00:38:58.462 "method": "bdev_nvme_attach_controller" 00:38:58.462 }, 00:38:58.462 { 00:38:58.462 "method": "bdev_wait_for_examine" 00:38:58.462 } 00:38:58.462 ] 00:38:58.462 } 00:38:58.462 ] 00:38:58.462 } 00:38:58.462 [2024-04-24 02:08:58.289188] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:38:58.462 [2024-04-24 02:08:58.289445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143319 ] 00:38:58.462 [2024-04-24 02:08:58.445061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.720 [2024-04-24 02:08:58.676418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.666  Copying: 48/48 [kB] (average 46 MBps) 00:39:00.666 00:39:00.666 02:09:00 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:00.666 02:09:00 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:39:00.666 02:09:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:00.666 02:09:00 -- dd/common.sh@11 -- # local nvme_ref= 00:39:00.666 02:09:00 -- dd/common.sh@12 -- # local size=49152 00:39:00.666 02:09:00 -- dd/common.sh@14 -- # local bs=1048576 00:39:00.666 02:09:00 -- dd/common.sh@15 -- # local count=1 00:39:00.666 02:09:00 -- dd/common.sh@18 -- # gen_conf 00:39:00.666 02:09:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:00.666 02:09:00 -- dd/common.sh@31 -- # xtrace_disable 00:39:00.666 02:09:00 -- common/autotest_common.sh@10 -- # set +x 00:39:00.666 { 00:39:00.666 "subsystems": [ 00:39:00.666 { 00:39:00.666 "subsystem": "bdev", 00:39:00.666 "config": [ 00:39:00.666 { 00:39:00.666 "params": { 00:39:00.666 "trtype": "pcie", 00:39:00.666 "traddr": "0000:00:10.0", 00:39:00.666 "name": "Nvme0" 00:39:00.666 }, 00:39:00.666 "method": "bdev_nvme_attach_controller" 00:39:00.666 }, 00:39:00.666 { 00:39:00.666 "method": "bdev_wait_for_examine" 00:39:00.666 } 00:39:00.666 ] 00:39:00.666 } 00:39:00.666 ] 00:39:00.666 } 00:39:00.666 [2024-04-24 02:09:00.636541] Starting SPDK v24.05-pre git sha1 
3f3de12cc / DPDK 23.11.0 initialization... 00:39:00.666 [2024-04-24 02:09:00.637161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143348 ] 00:39:00.924 [2024-04-24 02:09:00.796273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.183 [2024-04-24 02:09:01.020798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.814  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:02.814 00:39:02.814 02:09:02 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:02.814 02:09:02 -- dd/basic_rw.sh@23 -- # count=3 00:39:02.814 02:09:02 -- dd/basic_rw.sh@24 -- # count=3 00:39:02.814 02:09:02 -- dd/basic_rw.sh@25 -- # size=49152 00:39:02.814 02:09:02 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:39:02.814 02:09:02 -- dd/common.sh@98 -- # xtrace_disable 00:39:02.814 02:09:02 -- common/autotest_common.sh@10 -- # set +x 00:39:03.380 02:09:03 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:39:03.380 02:09:03 -- dd/basic_rw.sh@30 -- # gen_conf 00:39:03.380 02:09:03 -- dd/common.sh@31 -- # xtrace_disable 00:39:03.380 02:09:03 -- common/autotest_common.sh@10 -- # set +x 00:39:03.380 { 00:39:03.380 "subsystems": [ 00:39:03.380 { 00:39:03.380 "subsystem": "bdev", 00:39:03.380 "config": [ 00:39:03.380 { 00:39:03.380 "params": { 00:39:03.380 "trtype": "pcie", 00:39:03.380 "traddr": "0000:00:10.0", 00:39:03.380 "name": "Nvme0" 00:39:03.380 }, 00:39:03.380 "method": "bdev_nvme_attach_controller" 00:39:03.380 }, 00:39:03.380 { 00:39:03.380 "method": "bdev_wait_for_examine" 00:39:03.380 } 00:39:03.380 ] 00:39:03.380 } 00:39:03.380 ] 00:39:03.380 } 00:39:03.380 [2024-04-24 02:09:03.289744] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:03.380 [2024-04-24 02:09:03.290155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143387 ] 00:39:03.639 [2024-04-24 02:09:03.469899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.639 [2024-04-24 02:09:03.694316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.580  Copying: 48/48 [kB] (average 46 MBps) 00:39:05.580 00:39:05.580 02:09:05 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:39:05.580 02:09:05 -- dd/basic_rw.sh@37 -- # gen_conf 00:39:05.580 02:09:05 -- dd/common.sh@31 -- # xtrace_disable 00:39:05.580 02:09:05 -- common/autotest_common.sh@10 -- # set +x 00:39:05.580 { 00:39:05.580 "subsystems": [ 00:39:05.580 { 00:39:05.580 "subsystem": "bdev", 00:39:05.580 "config": [ 00:39:05.580 { 00:39:05.580 "params": { 00:39:05.580 "trtype": "pcie", 00:39:05.580 "traddr": "0000:00:10.0", 00:39:05.580 "name": "Nvme0" 00:39:05.580 }, 00:39:05.580 "method": "bdev_nvme_attach_controller" 00:39:05.580 }, 00:39:05.580 { 00:39:05.580 "method": "bdev_wait_for_examine" 00:39:05.580 } 00:39:05.580 ] 00:39:05.580 } 00:39:05.580 ] 00:39:05.580 } 00:39:05.580 [2024-04-24 02:09:05.647508] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:05.580 [2024-04-24 02:09:05.648492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143419 ] 00:39:05.838 [2024-04-24 02:09:05.828202] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.096 [2024-04-24 02:09:06.050913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.056  Copying: 48/48 [kB] (average 46 MBps) 00:39:08.056 00:39:08.056 02:09:07 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:08.056 02:09:07 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:39:08.056 02:09:07 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:08.056 02:09:07 -- dd/common.sh@11 -- # local nvme_ref= 00:39:08.056 02:09:07 -- dd/common.sh@12 -- # local size=49152 00:39:08.056 02:09:07 -- dd/common.sh@14 -- # local bs=1048576 00:39:08.056 02:09:07 -- dd/common.sh@15 -- # local count=1 00:39:08.056 02:09:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:08.056 02:09:07 -- dd/common.sh@18 -- # gen_conf 00:39:08.056 02:09:07 -- dd/common.sh@31 -- # xtrace_disable 00:39:08.056 02:09:07 -- common/autotest_common.sh@10 -- # set +x 00:39:08.056 { 00:39:08.056 "subsystems": [ 00:39:08.056 { 00:39:08.056 "subsystem": "bdev", 00:39:08.056 "config": [ 00:39:08.056 { 00:39:08.056 "params": { 00:39:08.056 "trtype": "pcie", 00:39:08.056 "traddr": "0000:00:10.0", 00:39:08.056 "name": "Nvme0" 00:39:08.056 }, 00:39:08.056 "method": "bdev_nvme_attach_controller" 00:39:08.056 }, 00:39:08.056 { 00:39:08.056 "method": "bdev_wait_for_examine" 00:39:08.056 } 00:39:08.056 ] 00:39:08.056 } 00:39:08.056 ] 00:39:08.056 } 00:39:08.056 [2024-04-24 02:09:07.909750] Starting SPDK v24.05-pre git sha1 
3f3de12cc / DPDK 23.11.0 initialization... 00:39:08.056 [2024-04-24 02:09:07.910183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143454 ] 00:39:08.056 [2024-04-24 02:09:08.088694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.315 [2024-04-24 02:09:08.319670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.783  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:10.784 00:39:10.784 ************************************ 00:39:10.784 END TEST dd_rw 00:39:10.784 ************************************ 00:39:10.784 00:39:10.784 real 0m46.193s 00:39:10.784 user 0m39.913s 00:39:10.784 sys 0m4.944s 00:39:10.784 02:09:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:10.784 02:09:10 -- common/autotest_common.sh@10 -- # set +x 00:39:10.784 02:09:10 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:39:10.784 02:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:10.784 02:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:10.784 02:09:10 -- common/autotest_common.sh@10 -- # set +x 00:39:10.784 ************************************ 00:39:10.784 START TEST dd_rw_offset 00:39:10.784 ************************************ 00:39:10.784 02:09:10 -- common/autotest_common.sh@1111 -- # basic_offset 00:39:10.784 02:09:10 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:39:10.784 02:09:10 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:39:10.784 02:09:10 -- dd/common.sh@98 -- # xtrace_disable 00:39:10.784 02:09:10 -- common/autotest_common.sh@10 -- # set +x 00:39:10.784 02:09:10 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:39:10.784 02:09:10 -- dd/basic_rw.sh@56 -- # 
data=nithpkedidagjbegaqrhk2t44zr82zalu2yb70miemcolll7darnkp02f92gz0eifymecm8ndij5hqmz4ud54mt4p0u8n5s9e7i4eug2vtk9r3k4ozp7a4fgnw5q4aqv5atmoqa3tavwdzmqbopsodaaiswpp5i8tpfje2bqk67gwipstg5zp2hxkqolo0fwtfbmvkd51gk9butr921k3t6vr9c7f33uenrp4zk9vjn317ar61y9e3c0klg6sieb3nsiz4gx9t5rqtzkpvpxpazj01azy3oryodo1pnt8du7xnthjza0gntktbco5oqkrwckdkfirxu86jdux5a83whrffmxox2k7vxss7pbfnjhgqyejm1ds14kfna7bmy5sa5ly6digon5lizm5jdw8dgytmpp90gsg252z7spwu5texmg1sd9uil5fk2s0vtsfokeqxmfm0vapo1e4eoqnyxl7ypzsfjfpckvyjukpu0ujvtfad1fpmyd9ljhthtbtt3i3ugnpbv4ihazrdm8s3kryzatmlpae3sf39dijesxfrcrc1io1v7yp3vy6i8sryder0vlpnbgq74iqc1gvk99ndv4s3e902knmbqy6qb9c16cf0swaf0vut8y0i2fuc7u9k4i7j3qiqgpoenxstbpw0jeuimwp9qqgvbrqomkc9wmuzf7pau3z8a8vtd3o3eqzs7mozzjjeyfquz0b8zgj6rogpugkbzlp00nocis422jbzgnqk5tnvwm2r5j56ff8okfx5eyxaksxgve06cuzlsd8hi7gsij3l1qiyi3jqv95g5losbnuqbi99znbybtk3wd0d9r8z6qp03wvw5ayq9133ufz8xjjuy1qclidf4q5tgsqeyv45aylamolresdf4l4a7sq1h8ud15994nnlahcxcq6bggh3x9sba858vreek2t9w7zhm3k07zh5m1hd7eowemkmq7xtmvl5kiyuehfvk6glaghm1zgw2tri0xg7huc0r0iyy627cxrm3z4231e0reoh212g638jdh67f3pgxxxqcxvxcsxbxtc0gibm8iya1zyiexmsertybq0zvqvaxh1fcve7hgkvnllglaf2pvzrzd6yq090vylt105kgc7i779avb602mke8ri86zj541ipj52wzeqyp5vq8wcjb7mty2kej3opwqz1uyqff1qd65282vcuwb4aefd185ntb6voaeb5wg06v9cw607blf36fwkt9ewv0vts7m27496xjz0qxt9r0f3fsh0ox9emndcfiyr1hz1hxgk85p6au9oz6ez3vuosfqz3omvv3gt62rxvhg4r7vu0saexwwraia9hfqmwpa3evdb20z7dz84sqnomgnwd5unm1qlpro7itmduk4zutoxcun8exudc2szoo69dubzjjpv3pf0du4o5bmbvl4thpci2eq4lhmo31q20yut85qe5imq3aodyct5sktzimz8x9kyc0jrqfl0rtve5mczyytc7yhmqetz34zw2pgkoq7rl1zosmgi5pzpblxsa4job3iwtdx71vnsmogb3ar0c3xwyy4m1w977kejsuf19ilaqwhfd772wr89llqi3yr4ss0jnjifjcijrvxambcf0zcu9y9eg3f778ubdbs731t8nmhrphhbnmw01c1lf6yc6mfu7x871tr4oamx6rh9aqgwq42jsf4e401teq3hsrx474c3m9z0oa3ws9eglr71nuzl163myqhdf9dp3a0gy1aj97xfwzx6e4hb9p4w7sy6mgodle0mhir4kz4vpw4bkrez2l15vhwjqcerpssetjw23ek6wl2uwc24hwqnr8ju86py6ljs9llt6crboh6pedgfbjaewzys8slma6lcykxyxsyxwqpb5ayxhk1xjk3q6zgpos70m76sopurd25azbiu3prkptsw3i348d832zi0hzrcju3g8vp1yrti1fpxwlf727ikhs3vaifhgxzgxs4lq3l9a4ese9jcmzlydgjnhuqrh1n1h50e3wz4srjsapecdppmtd9b40i4pi3gj0w37cceibhaano0f50xzkrvmuy8pksnhnyj0h26dn5uz1oqizhnsx68sp79iheq0mapw87iwpbf1ksek6r72nx8tlglkobxa1mc4lxeh8f483s2waqaxe0zreoz541r6nkoexdyhvo6b4mjisyrvt2p28yyd2rt6ukeohpdcgbs6n6e5et9bvqylmnsvc9aqg0mty0mg184oyufltsinrfqfcntrv9crb6bepjdgm8vmnwt0c2exvgl7qjq2t2szafe6ztoq8sed9jngiwismig6xbkq2mn7iesoeqf6306aaykzjlqui2sjze7fikv2h1trk0dd8jgzh14fk4tzff4532z0zy4fjr8ho91br8c8uw3zd2qvlh12ougsbhyut0tae1e7hzx9z2wpfq03nu5b6e01q13guo14o6qrwp53pd0x2dt44wf154pue9o2lfhnmvw3m88i8pdwvmkfax8fbor7o2fk7rv1qq42lkq31h1zu3f3qmul6vvhizrkcsuvu90ccqbioephldr2lfw6rts2tr449unnbjavb4zrk92yicewt65sjgfnspb2quxmru4zyznoyeq0cnkwfy88bm6n3jase19z6m27t4zv8h334cvu7hqx7z1klnb5fgkqmr1xp56vex25fpwz9bsbl93eio80e4pun8rkjh21isn7iyn7z1bson3morulqqhmy2m40gx12x0ol48ur10ael4ve7qqw10s3wg7y85dk9d85i7bgwo75a4z8yq8g4urfv6lp37giuspbzy9bxne142zi2gmg8s6nf6dibj5j2qplv8c1byi0zrk0jlpocps7gkde0bs2dn08qukf39erzcffx358sx46she3qbavv11blkk3oxq8hpbmkkqfg90dtxjb8jy5u68qlbtkuplmmfoamjxvvwrr2ggsn08qy3xcanvnwwl4h0ssie7jrp4b4lrtjz47lzelru6xx8islfig2gntuplem1sdxgv52svarnj97f9j3pffa67mm88hxhzo0sobdkx37sy3i9pag1kujddsrr3gtj9ww9gy7j7vordxrzynqpvaxdp66qkqlhdvqj8q8vxku8bc56zlnr3nmqzwn8q5o0mgi3xf4xeda7egv9zrw021pf2888hk88czuk0junhbjopiu5s1lsre32h2ttonjb900itj0dyv55obi2jt3z9d1qzlg0oibxdiuw19ro3kh6uvtxvcpl8lhb3iget98rmmvtstlf4d3w4a2ktnay1ewmwwtfyvtlz70atqudyvb7lkpanznsn3sfhmbxlrlvz44demdj8z4uaoovavi2feqmo3yu4f5zqwaqnca4pzadzay9ez8pdo05l30ou6kla65ip47hqli0jn3gfh76nm8fm1truh0wg7g0fgpe45ddxa5ygvs5ynblvcrlrx7gxtb57af5n9gfpsnp9s26emg8nxjcbtamr
kdlix01ga0huh2c8vtcq872dpsmjdmrg9iyu7nigrmjt23lzlxxvjl444swztzop0l3t4e7y7w63whufjnighdn6x32pbrii25p0oqkwbsvtuult5zntfh3mm3wtlqzvycrwpec2qnl0f5i1k1ztihy4r1abvdnz74cumd4phc32nm2aw2lu8wo1xd4cyzwcrb9j28l0qydhh4z3bs1p695p27ymmmgnvm77w3rw5qwvb5ypplt12jaz94g3wfnmyoykycx1gvsfgr5ylit7sbta2b3vzcquo3ikgixgbih7p6oop29sxalna1jg49k0dz8oh5vwmqr2kvol4qhe37s21vphkvjdgxe13lyc7pswp2ywfw49gze86voevvp8jdxbds9x18o6lmzepmkuzcl6ctmslh2wgv035dwo6vhd26tm79mn7er1v43zvd7uhleqqpvm9id8iyln4o61a1oyyntm70p7gqzabir3xdylx0z7o0jsk5z4vo8s0rzkawuc49369tizzcjogyxebc7vt5nnaka3g3 00:39:10.784 02:09:10 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:39:10.784 02:09:10 -- dd/basic_rw.sh@59 -- # gen_conf 00:39:10.784 02:09:10 -- dd/common.sh@31 -- # xtrace_disable 00:39:10.784 02:09:10 -- common/autotest_common.sh@10 -- # set +x 00:39:10.784 { 00:39:10.784 "subsystems": [ 00:39:10.784 { 00:39:10.784 "subsystem": "bdev", 00:39:10.784 "config": [ 00:39:10.784 { 00:39:10.784 "params": { 00:39:10.784 "trtype": "pcie", 00:39:10.784 "traddr": "0000:00:10.0", 00:39:10.784 "name": "Nvme0" 00:39:10.784 }, 00:39:10.784 "method": "bdev_nvme_attach_controller" 00:39:10.784 }, 00:39:10.784 { 00:39:10.784 "method": "bdev_wait_for_examine" 00:39:10.784 } 00:39:10.784 ] 00:39:10.784 } 00:39:10.784 ] 00:39:10.784 } 00:39:10.784 [2024-04-24 02:09:10.588775] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:10.784 [2024-04-24 02:09:10.588932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143517 ] 00:39:10.784 [2024-04-24 02:09:10.756976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.041 [2024-04-24 02:09:11.009687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.981  Copying: 4096/4096 [B] (average 4000 kBps) 00:39:12.981 00:39:12.981 02:09:12 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:39:12.981 02:09:12 -- dd/basic_rw.sh@65 -- # gen_conf 00:39:12.981 02:09:12 -- dd/common.sh@31 -- # xtrace_disable 00:39:12.981 02:09:12 -- common/autotest_common.sh@10 -- # set +x 00:39:12.981 [2024-04-24 02:09:12.866472] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
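What dd_rw_offset checks in the run traced here: 4096 generated bytes are written one block into the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and compared character-for-character against the original string (the read -rn4096 / [[ ... == ... ]] check a little further down). Condensed to a sketch, reusing the illustrative bdev.json from above and relative paths:
data=$(gen_bytes 4096)                      # harness helper; any 4096-character string would do
printf %s "$data" > dd.dump0
# write at block offset 1, then read the same single block back
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev.json
read -rn4096 data_check < dd.dump1
[[ $data == "$data_check" ]] && echo "offset round trip intact"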
00:39:12.981 [2024-04-24 02:09:12.866617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143552 ] 00:39:12.981 { 00:39:12.981 "subsystems": [ 00:39:12.981 { 00:39:12.981 "subsystem": "bdev", 00:39:12.981 "config": [ 00:39:12.981 { 00:39:12.981 "params": { 00:39:12.981 "trtype": "pcie", 00:39:12.981 "traddr": "0000:00:10.0", 00:39:12.981 "name": "Nvme0" 00:39:12.981 }, 00:39:12.981 "method": "bdev_nvme_attach_controller" 00:39:12.981 }, 00:39:12.981 { 00:39:12.981 "method": "bdev_wait_for_examine" 00:39:12.981 } 00:39:12.981 ] 00:39:12.981 } 00:39:12.981 ] 00:39:12.981 } 00:39:12.981 [2024-04-24 02:09:13.024532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.238 [2024-04-24 02:09:13.260272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.180  Copying: 4096/4096 [B] (average 4000 kBps) 00:39:15.180 00:39:15.180 02:09:15 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:39:15.180 02:09:15 -- dd/basic_rw.sh@72 -- # [[ nithpkedidagjbegaqrhk2t44zr82zalu2yb70miemcolll7darnkp02f92gz0eifymecm8ndij5hqmz4ud54mt4p0u8n5s9e7i4eug2vtk9r3k4ozp7a4fgnw5q4aqv5atmoqa3tavwdzmqbopsodaaiswpp5i8tpfje2bqk67gwipstg5zp2hxkqolo0fwtfbmvkd51gk9butr921k3t6vr9c7f33uenrp4zk9vjn317ar61y9e3c0klg6sieb3nsiz4gx9t5rqtzkpvpxpazj01azy3oryodo1pnt8du7xnthjza0gntktbco5oqkrwckdkfirxu86jdux5a83whrffmxox2k7vxss7pbfnjhgqyejm1ds14kfna7bmy5sa5ly6digon5lizm5jdw8dgytmpp90gsg252z7spwu5texmg1sd9uil5fk2s0vtsfokeqxmfm0vapo1e4eoqnyxl7ypzsfjfpckvyjukpu0ujvtfad1fpmyd9ljhthtbtt3i3ugnpbv4ihazrdm8s3kryzatmlpae3sf39dijesxfrcrc1io1v7yp3vy6i8sryder0vlpnbgq74iqc1gvk99ndv4s3e902knmbqy6qb9c16cf0swaf0vut8y0i2fuc7u9k4i7j3qiqgpoenxstbpw0jeuimwp9qqgvbrqomkc9wmuzf7pau3z8a8vtd3o3eqzs7mozzjjeyfquz0b8zgj6rogpugkbzlp00nocis422jbzgnqk5tnvwm2r5j56ff8okfx5eyxaksxgve06cuzlsd8hi7gsij3l1qiyi3jqv95g5losbnuqbi99znbybtk3wd0d9r8z6qp03wvw5ayq9133ufz8xjjuy1qclidf4q5tgsqeyv45aylamolresdf4l4a7sq1h8ud15994nnlahcxcq6bggh3x9sba858vreek2t9w7zhm3k07zh5m1hd7eowemkmq7xtmvl5kiyuehfvk6glaghm1zgw2tri0xg7huc0r0iyy627cxrm3z4231e0reoh212g638jdh67f3pgxxxqcxvxcsxbxtc0gibm8iya1zyiexmsertybq0zvqvaxh1fcve7hgkvnllglaf2pvzrzd6yq090vylt105kgc7i779avb602mke8ri86zj541ipj52wzeqyp5vq8wcjb7mty2kej3opwqz1uyqff1qd65282vcuwb4aefd185ntb6voaeb5wg06v9cw607blf36fwkt9ewv0vts7m27496xjz0qxt9r0f3fsh0ox9emndcfiyr1hz1hxgk85p6au9oz6ez3vuosfqz3omvv3gt62rxvhg4r7vu0saexwwraia9hfqmwpa3evdb20z7dz84sqnomgnwd5unm1qlpro7itmduk4zutoxcun8exudc2szoo69dubzjjpv3pf0du4o5bmbvl4thpci2eq4lhmo31q20yut85qe5imq3aodyct5sktzimz8x9kyc0jrqfl0rtve5mczyytc7yhmqetz34zw2pgkoq7rl1zosmgi5pzpblxsa4job3iwtdx71vnsmogb3ar0c3xwyy4m1w977kejsuf19ilaqwhfd772wr89llqi3yr4ss0jnjifjcijrvxambcf0zcu9y9eg3f778ubdbs731t8nmhrphhbnmw01c1lf6yc6mfu7x871tr4oamx6rh9aqgwq42jsf4e401teq3hsrx474c3m9z0oa3ws9eglr71nuzl163myqhdf9dp3a0gy1aj97xfwzx6e4hb9p4w7sy6mgodle0mhir4kz4vpw4bkrez2l15vhwjqcerpssetjw23ek6wl2uwc24hwqnr8ju86py6ljs9llt6crboh6pedgfbjaewzys8slma6lcykxyxsyxwqpb5ayxhk1xjk3q6zgpos70m76sopurd25azbiu3prkptsw3i348d832zi0hzrcju3g8vp1yrti1fpxwlf727ikhs3vaifhgxzgxs4lq3l9a4ese9jcmzlydgjnhuqrh1n1h50e3wz4srjsapecdppmtd9b40i4pi3gj0w37cceibhaano0f50xzkrvmuy8pksnhnyj0h26dn5uz1oqizhnsx68sp79iheq0mapw87iwpbf1ksek6r72nx8tlglkobxa1mc4lxeh8f483s2waqaxe0zreoz541r6nkoexdyhvo6b4mjisyrvt2p28yyd2rt6ukeohpdcgbs6n6e5et9bvqylmnsvc9aqg0mty0mg184oyufltsinrfqfcntrv9crb6bepjdgm8vmnwt0c2exvgl7qjq2t2szafe6ztoq8sed9jngiwismig6xbkq2mn7iesoeqf6306aaykzjlqui2sjze7
fikv2h1trk0dd8jgzh14fk4tzff4532z0zy4fjr8ho91br8c8uw3zd2qvlh12ougsbhyut0tae1e7hzx9z2wpfq03nu5b6e01q13guo14o6qrwp53pd0x2dt44wf154pue9o2lfhnmvw3m88i8pdwvmkfax8fbor7o2fk7rv1qq42lkq31h1zu3f3qmul6vvhizrkcsuvu90ccqbioephldr2lfw6rts2tr449unnbjavb4zrk92yicewt65sjgfnspb2quxmru4zyznoyeq0cnkwfy88bm6n3jase19z6m27t4zv8h334cvu7hqx7z1klnb5fgkqmr1xp56vex25fpwz9bsbl93eio80e4pun8rkjh21isn7iyn7z1bson3morulqqhmy2m40gx12x0ol48ur10ael4ve7qqw10s3wg7y85dk9d85i7bgwo75a4z8yq8g4urfv6lp37giuspbzy9bxne142zi2gmg8s6nf6dibj5j2qplv8c1byi0zrk0jlpocps7gkde0bs2dn08qukf39erzcffx358sx46she3qbavv11blkk3oxq8hpbmkkqfg90dtxjb8jy5u68qlbtkuplmmfoamjxvvwrr2ggsn08qy3xcanvnwwl4h0ssie7jrp4b4lrtjz47lzelru6xx8islfig2gntuplem1sdxgv52svarnj97f9j3pffa67mm88hxhzo0sobdkx37sy3i9pag1kujddsrr3gtj9ww9gy7j7vordxrzynqpvaxdp66qkqlhdvqj8q8vxku8bc56zlnr3nmqzwn8q5o0mgi3xf4xeda7egv9zrw021pf2888hk88czuk0junhbjopiu5s1lsre32h2ttonjb900itj0dyv55obi2jt3z9d1qzlg0oibxdiuw19ro3kh6uvtxvcpl8lhb3iget98rmmvtstlf4d3w4a2ktnay1ewmwwtfyvtlz70atqudyvb7lkpanznsn3sfhmbxlrlvz44demdj8z4uaoovavi2feqmo3yu4f5zqwaqnca4pzadzay9ez8pdo05l30ou6kla65ip47hqli0jn3gfh76nm8fm1truh0wg7g0fgpe45ddxa5ygvs5ynblvcrlrx7gxtb57af5n9gfpsnp9s26emg8nxjcbtamrkdlix01ga0huh2c8vtcq872dpsmjdmrg9iyu7nigrmjt23lzlxxvjl444swztzop0l3t4e7y7w63whufjnighdn6x32pbrii25p0oqkwbsvtuult5zntfh3mm3wtlqzvycrwpec2qnl0f5i1k1ztihy4r1abvdnz74cumd4phc32nm2aw2lu8wo1xd4cyzwcrb9j28l0qydhh4z3bs1p695p27ymmmgnvm77w3rw5qwvb5ypplt12jaz94g3wfnmyoykycx1gvsfgr5ylit7sbta2b3vzcquo3ikgixgbih7p6oop29sxalna1jg49k0dz8oh5vwmqr2kvol4qhe37s21vphkvjdgxe13lyc7pswp2ywfw49gze86voevvp8jdxbds9x18o6lmzepmkuzcl6ctmslh2wgv035dwo6vhd26tm79mn7er1v43zvd7uhleqqpvm9id8iyln4o61a1oyyntm70p7gqzabir3xdylx0z7o0jsk5z4vo8s0rzkawuc49369tizzcjogyxebc7vt5nnaka3g3 == \n\i\t\h\p\k\e\d\i\d\a\g\j\b\e\g\a\q\r\h\k\2\t\4\4\z\r\8\2\z\a\l\u\2\y\b\7\0\m\i\e\m\c\o\l\l\l\7\d\a\r\n\k\p\0\2\f\9\2\g\z\0\e\i\f\y\m\e\c\m\8\n\d\i\j\5\h\q\m\z\4\u\d\5\4\m\t\4\p\0\u\8\n\5\s\9\e\7\i\4\e\u\g\2\v\t\k\9\r\3\k\4\o\z\p\7\a\4\f\g\n\w\5\q\4\a\q\v\5\a\t\m\o\q\a\3\t\a\v\w\d\z\m\q\b\o\p\s\o\d\a\a\i\s\w\p\p\5\i\8\t\p\f\j\e\2\b\q\k\6\7\g\w\i\p\s\t\g\5\z\p\2\h\x\k\q\o\l\o\0\f\w\t\f\b\m\v\k\d\5\1\g\k\9\b\u\t\r\9\2\1\k\3\t\6\v\r\9\c\7\f\3\3\u\e\n\r\p\4\z\k\9\v\j\n\3\1\7\a\r\6\1\y\9\e\3\c\0\k\l\g\6\s\i\e\b\3\n\s\i\z\4\g\x\9\t\5\r\q\t\z\k\p\v\p\x\p\a\z\j\0\1\a\z\y\3\o\r\y\o\d\o\1\p\n\t\8\d\u\7\x\n\t\h\j\z\a\0\g\n\t\k\t\b\c\o\5\o\q\k\r\w\c\k\d\k\f\i\r\x\u\8\6\j\d\u\x\5\a\8\3\w\h\r\f\f\m\x\o\x\2\k\7\v\x\s\s\7\p\b\f\n\j\h\g\q\y\e\j\m\1\d\s\1\4\k\f\n\a\7\b\m\y\5\s\a\5\l\y\6\d\i\g\o\n\5\l\i\z\m\5\j\d\w\8\d\g\y\t\m\p\p\9\0\g\s\g\2\5\2\z\7\s\p\w\u\5\t\e\x\m\g\1\s\d\9\u\i\l\5\f\k\2\s\0\v\t\s\f\o\k\e\q\x\m\f\m\0\v\a\p\o\1\e\4\e\o\q\n\y\x\l\7\y\p\z\s\f\j\f\p\c\k\v\y\j\u\k\p\u\0\u\j\v\t\f\a\d\1\f\p\m\y\d\9\l\j\h\t\h\t\b\t\t\3\i\3\u\g\n\p\b\v\4\i\h\a\z\r\d\m\8\s\3\k\r\y\z\a\t\m\l\p\a\e\3\s\f\3\9\d\i\j\e\s\x\f\r\c\r\c\1\i\o\1\v\7\y\p\3\v\y\6\i\8\s\r\y\d\e\r\0\v\l\p\n\b\g\q\7\4\i\q\c\1\g\v\k\9\9\n\d\v\4\s\3\e\9\0\2\k\n\m\b\q\y\6\q\b\9\c\1\6\c\f\0\s\w\a\f\0\v\u\t\8\y\0\i\2\f\u\c\7\u\9\k\4\i\7\j\3\q\i\q\g\p\o\e\n\x\s\t\b\p\w\0\j\e\u\i\m\w\p\9\q\q\g\v\b\r\q\o\m\k\c\9\w\m\u\z\f\7\p\a\u\3\z\8\a\8\v\t\d\3\o\3\e\q\z\s\7\m\o\z\z\j\j\e\y\f\q\u\z\0\b\8\z\g\j\6\r\o\g\p\u\g\k\b\z\l\p\0\0\n\o\c\i\s\4\2\2\j\b\z\g\n\q\k\5\t\n\v\w\m\2\r\5\j\5\6\f\f\8\o\k\f\x\5\e\y\x\a\k\s\x\g\v\e\0\6\c\u\z\l\s\d\8\h\i\7\g\s\i\j\3\l\1\q\i\y\i\3\j\q\v\9\5\g\5\l\o\s\b\n\u\q\b\i\9\9\z\n\b\y\b\t\k\3\w\d\0\d\9\r\8\z\6\q\p\0\3\w\v\w\5\a\y\q\9\1\3\3\u\f\z\8\x\j\j\u\y\1\q\c\l\i\d\f\4\q\5\t\g\s\q\e\y\v\4\5\a\y\l\a\m\o\l\r\e\s\d\f\4\l\4\a\7\s\q\1\h\8\u\d\1\5\9\9\4\n
\n\l\a\h\c\x\c\q\6\b\g\g\h\3\x\9\s\b\a\8\5\8\v\r\e\e\k\2\t\9\w\7\z\h\m\3\k\0\7\z\h\5\m\1\h\d\7\e\o\w\e\m\k\m\q\7\x\t\m\v\l\5\k\i\y\u\e\h\f\v\k\6\g\l\a\g\h\m\1\z\g\w\2\t\r\i\0\x\g\7\h\u\c\0\r\0\i\y\y\6\2\7\c\x\r\m\3\z\4\2\3\1\e\0\r\e\o\h\2\1\2\g\6\3\8\j\d\h\6\7\f\3\p\g\x\x\x\q\c\x\v\x\c\s\x\b\x\t\c\0\g\i\b\m\8\i\y\a\1\z\y\i\e\x\m\s\e\r\t\y\b\q\0\z\v\q\v\a\x\h\1\f\c\v\e\7\h\g\k\v\n\l\l\g\l\a\f\2\p\v\z\r\z\d\6\y\q\0\9\0\v\y\l\t\1\0\5\k\g\c\7\i\7\7\9\a\v\b\6\0\2\m\k\e\8\r\i\8\6\z\j\5\4\1\i\p\j\5\2\w\z\e\q\y\p\5\v\q\8\w\c\j\b\7\m\t\y\2\k\e\j\3\o\p\w\q\z\1\u\y\q\f\f\1\q\d\6\5\2\8\2\v\c\u\w\b\4\a\e\f\d\1\8\5\n\t\b\6\v\o\a\e\b\5\w\g\0\6\v\9\c\w\6\0\7\b\l\f\3\6\f\w\k\t\9\e\w\v\0\v\t\s\7\m\2\7\4\9\6\x\j\z\0\q\x\t\9\r\0\f\3\f\s\h\0\o\x\9\e\m\n\d\c\f\i\y\r\1\h\z\1\h\x\g\k\8\5\p\6\a\u\9\o\z\6\e\z\3\v\u\o\s\f\q\z\3\o\m\v\v\3\g\t\6\2\r\x\v\h\g\4\r\7\v\u\0\s\a\e\x\w\w\r\a\i\a\9\h\f\q\m\w\p\a\3\e\v\d\b\2\0\z\7\d\z\8\4\s\q\n\o\m\g\n\w\d\5\u\n\m\1\q\l\p\r\o\7\i\t\m\d\u\k\4\z\u\t\o\x\c\u\n\8\e\x\u\d\c\2\s\z\o\o\6\9\d\u\b\z\j\j\p\v\3\p\f\0\d\u\4\o\5\b\m\b\v\l\4\t\h\p\c\i\2\e\q\4\l\h\m\o\3\1\q\2\0\y\u\t\8\5\q\e\5\i\m\q\3\a\o\d\y\c\t\5\s\k\t\z\i\m\z\8\x\9\k\y\c\0\j\r\q\f\l\0\r\t\v\e\5\m\c\z\y\y\t\c\7\y\h\m\q\e\t\z\3\4\z\w\2\p\g\k\o\q\7\r\l\1\z\o\s\m\g\i\5\p\z\p\b\l\x\s\a\4\j\o\b\3\i\w\t\d\x\7\1\v\n\s\m\o\g\b\3\a\r\0\c\3\x\w\y\y\4\m\1\w\9\7\7\k\e\j\s\u\f\1\9\i\l\a\q\w\h\f\d\7\7\2\w\r\8\9\l\l\q\i\3\y\r\4\s\s\0\j\n\j\i\f\j\c\i\j\r\v\x\a\m\b\c\f\0\z\c\u\9\y\9\e\g\3\f\7\7\8\u\b\d\b\s\7\3\1\t\8\n\m\h\r\p\h\h\b\n\m\w\0\1\c\1\l\f\6\y\c\6\m\f\u\7\x\8\7\1\t\r\4\o\a\m\x\6\r\h\9\a\q\g\w\q\4\2\j\s\f\4\e\4\0\1\t\e\q\3\h\s\r\x\4\7\4\c\3\m\9\z\0\o\a\3\w\s\9\e\g\l\r\7\1\n\u\z\l\1\6\3\m\y\q\h\d\f\9\d\p\3\a\0\g\y\1\a\j\9\7\x\f\w\z\x\6\e\4\h\b\9\p\4\w\7\s\y\6\m\g\o\d\l\e\0\m\h\i\r\4\k\z\4\v\p\w\4\b\k\r\e\z\2\l\1\5\v\h\w\j\q\c\e\r\p\s\s\e\t\j\w\2\3\e\k\6\w\l\2\u\w\c\2\4\h\w\q\n\r\8\j\u\8\6\p\y\6\l\j\s\9\l\l\t\6\c\r\b\o\h\6\p\e\d\g\f\b\j\a\e\w\z\y\s\8\s\l\m\a\6\l\c\y\k\x\y\x\s\y\x\w\q\p\b\5\a\y\x\h\k\1\x\j\k\3\q\6\z\g\p\o\s\7\0\m\7\6\s\o\p\u\r\d\2\5\a\z\b\i\u\3\p\r\k\p\t\s\w\3\i\3\4\8\d\8\3\2\z\i\0\h\z\r\c\j\u\3\g\8\v\p\1\y\r\t\i\1\f\p\x\w\l\f\7\2\7\i\k\h\s\3\v\a\i\f\h\g\x\z\g\x\s\4\l\q\3\l\9\a\4\e\s\e\9\j\c\m\z\l\y\d\g\j\n\h\u\q\r\h\1\n\1\h\5\0\e\3\w\z\4\s\r\j\s\a\p\e\c\d\p\p\m\t\d\9\b\4\0\i\4\p\i\3\g\j\0\w\3\7\c\c\e\i\b\h\a\a\n\o\0\f\5\0\x\z\k\r\v\m\u\y\8\p\k\s\n\h\n\y\j\0\h\2\6\d\n\5\u\z\1\o\q\i\z\h\n\s\x\6\8\s\p\7\9\i\h\e\q\0\m\a\p\w\8\7\i\w\p\b\f\1\k\s\e\k\6\r\7\2\n\x\8\t\l\g\l\k\o\b\x\a\1\m\c\4\l\x\e\h\8\f\4\8\3\s\2\w\a\q\a\x\e\0\z\r\e\o\z\5\4\1\r\6\n\k\o\e\x\d\y\h\v\o\6\b\4\m\j\i\s\y\r\v\t\2\p\2\8\y\y\d\2\r\t\6\u\k\e\o\h\p\d\c\g\b\s\6\n\6\e\5\e\t\9\b\v\q\y\l\m\n\s\v\c\9\a\q\g\0\m\t\y\0\m\g\1\8\4\o\y\u\f\l\t\s\i\n\r\f\q\f\c\n\t\r\v\9\c\r\b\6\b\e\p\j\d\g\m\8\v\m\n\w\t\0\c\2\e\x\v\g\l\7\q\j\q\2\t\2\s\z\a\f\e\6\z\t\o\q\8\s\e\d\9\j\n\g\i\w\i\s\m\i\g\6\x\b\k\q\2\m\n\7\i\e\s\o\e\q\f\6\3\0\6\a\a\y\k\z\j\l\q\u\i\2\s\j\z\e\7\f\i\k\v\2\h\1\t\r\k\0\d\d\8\j\g\z\h\1\4\f\k\4\t\z\f\f\4\5\3\2\z\0\z\y\4\f\j\r\8\h\o\9\1\b\r\8\c\8\u\w\3\z\d\2\q\v\l\h\1\2\o\u\g\s\b\h\y\u\t\0\t\a\e\1\e\7\h\z\x\9\z\2\w\p\f\q\0\3\n\u\5\b\6\e\0\1\q\1\3\g\u\o\1\4\o\6\q\r\w\p\5\3\p\d\0\x\2\d\t\4\4\w\f\1\5\4\p\u\e\9\o\2\l\f\h\n\m\v\w\3\m\8\8\i\8\p\d\w\v\m\k\f\a\x\8\f\b\o\r\7\o\2\f\k\7\r\v\1\q\q\4\2\l\k\q\3\1\h\1\z\u\3\f\3\q\m\u\l\6\v\v\h\i\z\r\k\c\s\u\v\u\9\0\c\c\q\b\i\o\e\p\h\l\d\r\2\l\f\w\6\r\t\s\2\t\r\4\4\9\u\n\n\b\j\a\v\b\4\z\r\k\9\2\y\i\c\e\w\t\6\5\s\j\g\f\n\s\p\b\2\q\u\x\m\r\u\4\z\y\z\n\o\y\e\q\0\c\n\k\w\f\y\8\8\b\m\6\n\3\j\a\s\e\1\9\z\6\m\2\7\t\4\z\v\8\h\3\3\4\c\v\
u\7\h\q\x\7\z\1\k\l\n\b\5\f\g\k\q\m\r\1\x\p\5\6\v\e\x\2\5\f\p\w\z\9\b\s\b\l\9\3\e\i\o\8\0\e\4\p\u\n\8\r\k\j\h\2\1\i\s\n\7\i\y\n\7\z\1\b\s\o\n\3\m\o\r\u\l\q\q\h\m\y\2\m\4\0\g\x\1\2\x\0\o\l\4\8\u\r\1\0\a\e\l\4\v\e\7\q\q\w\1\0\s\3\w\g\7\y\8\5\d\k\9\d\8\5\i\7\b\g\w\o\7\5\a\4\z\8\y\q\8\g\4\u\r\f\v\6\l\p\3\7\g\i\u\s\p\b\z\y\9\b\x\n\e\1\4\2\z\i\2\g\m\g\8\s\6\n\f\6\d\i\b\j\5\j\2\q\p\l\v\8\c\1\b\y\i\0\z\r\k\0\j\l\p\o\c\p\s\7\g\k\d\e\0\b\s\2\d\n\0\8\q\u\k\f\3\9\e\r\z\c\f\f\x\3\5\8\s\x\4\6\s\h\e\3\q\b\a\v\v\1\1\b\l\k\k\3\o\x\q\8\h\p\b\m\k\k\q\f\g\9\0\d\t\x\j\b\8\j\y\5\u\6\8\q\l\b\t\k\u\p\l\m\m\f\o\a\m\j\x\v\v\w\r\r\2\g\g\s\n\0\8\q\y\3\x\c\a\n\v\n\w\w\l\4\h\0\s\s\i\e\7\j\r\p\4\b\4\l\r\t\j\z\4\7\l\z\e\l\r\u\6\x\x\8\i\s\l\f\i\g\2\g\n\t\u\p\l\e\m\1\s\d\x\g\v\5\2\s\v\a\r\n\j\9\7\f\9\j\3\p\f\f\a\6\7\m\m\8\8\h\x\h\z\o\0\s\o\b\d\k\x\3\7\s\y\3\i\9\p\a\g\1\k\u\j\d\d\s\r\r\3\g\t\j\9\w\w\9\g\y\7\j\7\v\o\r\d\x\r\z\y\n\q\p\v\a\x\d\p\6\6\q\k\q\l\h\d\v\q\j\8\q\8\v\x\k\u\8\b\c\5\6\z\l\n\r\3\n\m\q\z\w\n\8\q\5\o\0\m\g\i\3\x\f\4\x\e\d\a\7\e\g\v\9\z\r\w\0\2\1\p\f\2\8\8\8\h\k\8\8\c\z\u\k\0\j\u\n\h\b\j\o\p\i\u\5\s\1\l\s\r\e\3\2\h\2\t\t\o\n\j\b\9\0\0\i\t\j\0\d\y\v\5\5\o\b\i\2\j\t\3\z\9\d\1\q\z\l\g\0\o\i\b\x\d\i\u\w\1\9\r\o\3\k\h\6\u\v\t\x\v\c\p\l\8\l\h\b\3\i\g\e\t\9\8\r\m\m\v\t\s\t\l\f\4\d\3\w\4\a\2\k\t\n\a\y\1\e\w\m\w\w\t\f\y\v\t\l\z\7\0\a\t\q\u\d\y\v\b\7\l\k\p\a\n\z\n\s\n\3\s\f\h\m\b\x\l\r\l\v\z\4\4\d\e\m\d\j\8\z\4\u\a\o\o\v\a\v\i\2\f\e\q\m\o\3\y\u\4\f\5\z\q\w\a\q\n\c\a\4\p\z\a\d\z\a\y\9\e\z\8\p\d\o\0\5\l\3\0\o\u\6\k\l\a\6\5\i\p\4\7\h\q\l\i\0\j\n\3\g\f\h\7\6\n\m\8\f\m\1\t\r\u\h\0\w\g\7\g\0\f\g\p\e\4\5\d\d\x\a\5\y\g\v\s\5\y\n\b\l\v\c\r\l\r\x\7\g\x\t\b\5\7\a\f\5\n\9\g\f\p\s\n\p\9\s\2\6\e\m\g\8\n\x\j\c\b\t\a\m\r\k\d\l\i\x\0\1\g\a\0\h\u\h\2\c\8\v\t\c\q\8\7\2\d\p\s\m\j\d\m\r\g\9\i\y\u\7\n\i\g\r\m\j\t\2\3\l\z\l\x\x\v\j\l\4\4\4\s\w\z\t\z\o\p\0\l\3\t\4\e\7\y\7\w\6\3\w\h\u\f\j\n\i\g\h\d\n\6\x\3\2\p\b\r\i\i\2\5\p\0\o\q\k\w\b\s\v\t\u\u\l\t\5\z\n\t\f\h\3\m\m\3\w\t\l\q\z\v\y\c\r\w\p\e\c\2\q\n\l\0\f\5\i\1\k\1\z\t\i\h\y\4\r\1\a\b\v\d\n\z\7\4\c\u\m\d\4\p\h\c\3\2\n\m\2\a\w\2\l\u\8\w\o\1\x\d\4\c\y\z\w\c\r\b\9\j\2\8\l\0\q\y\d\h\h\4\z\3\b\s\1\p\6\9\5\p\2\7\y\m\m\m\g\n\v\m\7\7\w\3\r\w\5\q\w\v\b\5\y\p\p\l\t\1\2\j\a\z\9\4\g\3\w\f\n\m\y\o\y\k\y\c\x\1\g\v\s\f\g\r\5\y\l\i\t\7\s\b\t\a\2\b\3\v\z\c\q\u\o\3\i\k\g\i\x\g\b\i\h\7\p\6\o\o\p\2\9\s\x\a\l\n\a\1\j\g\4\9\k\0\d\z\8\o\h\5\v\w\m\q\r\2\k\v\o\l\4\q\h\e\3\7\s\2\1\v\p\h\k\v\j\d\g\x\e\1\3\l\y\c\7\p\s\w\p\2\y\w\f\w\4\9\g\z\e\8\6\v\o\e\v\v\p\8\j\d\x\b\d\s\9\x\1\8\o\6\l\m\z\e\p\m\k\u\z\c\l\6\c\t\m\s\l\h\2\w\g\v\0\3\5\d\w\o\6\v\h\d\2\6\t\m\7\9\m\n\7\e\r\1\v\4\3\z\v\d\7\u\h\l\e\q\q\p\v\m\9\i\d\8\i\y\l\n\4\o\6\1\a\1\o\y\y\n\t\m\7\0\p\7\g\q\z\a\b\i\r\3\x\d\y\l\x\0\z\7\o\0\j\s\k\5\z\4\v\o\8\s\0\r\z\k\a\w\u\c\4\9\3\6\9\t\i\z\z\c\j\o\g\y\x\e\b\c\7\v\t\5\n\n\a\k\a\3\g\3 ]] 00:39:15.180 00:39:15.180 real 0m4.721s 00:39:15.180 user 0m4.144s 00:39:15.180 sys 0m0.445s 00:39:15.180 02:09:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:15.180 02:09:15 -- common/autotest_common.sh@10 -- # set +x 00:39:15.180 ************************************ 00:39:15.180 END TEST dd_rw_offset 00:39:15.180 ************************************ 00:39:15.180 02:09:15 -- dd/basic_rw.sh@1 -- # cleanup 00:39:15.180 02:09:15 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:39:15.180 02:09:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:15.180 02:09:15 -- dd/common.sh@11 -- # local nvme_ref= 00:39:15.180 02:09:15 -- dd/common.sh@12 -- # local size=0xffff 00:39:15.180 02:09:15 -- dd/common.sh@14 -- # local bs=1048576 
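The cleanup being set up at the end of this pass, and traced out on the next lines, is clear_nvme: before the dump files are deleted, the first 1 MiB of Nvme0n1 is overwritten from /dev/zero (bs=1048576, count=1) so the next test starts from known contents. Stripped down, with the same illustrative bdev.json and relative paths:
# zero the first megabyte of the bdev, then drop the host-side dump files
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json
rm -f dd.dump0 dd.dump1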
00:39:15.180 02:09:15 -- dd/common.sh@15 -- # local count=1 00:39:15.180 02:09:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:15.181 02:09:15 -- dd/common.sh@18 -- # gen_conf 00:39:15.181 02:09:15 -- dd/common.sh@31 -- # xtrace_disable 00:39:15.181 02:09:15 -- common/autotest_common.sh@10 -- # set +x 00:39:15.438 { 00:39:15.438 "subsystems": [ 00:39:15.438 { 00:39:15.438 "subsystem": "bdev", 00:39:15.438 "config": [ 00:39:15.438 { 00:39:15.438 "params": { 00:39:15.438 "trtype": "pcie", 00:39:15.438 "traddr": "0000:00:10.0", 00:39:15.438 "name": "Nvme0" 00:39:15.438 }, 00:39:15.438 "method": "bdev_nvme_attach_controller" 00:39:15.438 }, 00:39:15.438 { 00:39:15.438 "method": "bdev_wait_for_examine" 00:39:15.438 } 00:39:15.438 ] 00:39:15.438 } 00:39:15.438 ] 00:39:15.438 } 00:39:15.438 [2024-04-24 02:09:15.298886] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:15.438 [2024-04-24 02:09:15.299062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143599 ] 00:39:15.438 [2024-04-24 02:09:15.477283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.695 [2024-04-24 02:09:15.691368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.634  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:17.634 00:39:17.634 02:09:17 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:17.634 00:39:17.635 real 0m56.472s 00:39:17.635 user 0m48.569s 00:39:17.635 sys 0m6.247s 00:39:17.635 02:09:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:17.635 ************************************ 00:39:17.635 END TEST spdk_dd_basic_rw 00:39:17.635 ************************************ 00:39:17.635 02:09:17 -- common/autotest_common.sh@10 -- # set +x 00:39:17.635 02:09:17 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:39:17.635 02:09:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:17.635 02:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:17.635 02:09:17 -- common/autotest_common.sh@10 -- # set +x 00:39:17.635 ************************************ 00:39:17.635 START TEST spdk_dd_posix 00:39:17.635 ************************************ 00:39:17.635 02:09:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:39:17.892 * Looking for test storage... 
00:39:17.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:17.892 02:09:17 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:17.892 02:09:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:17.892 02:09:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:17.892 02:09:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:17.892 02:09:17 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:17.892 02:09:17 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:17.892 02:09:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:17.892 02:09:17 -- paths/export.sh@5 -- # export PATH 00:39:17.893 02:09:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:17.893 02:09:17 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:39:17.893 02:09:17 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:39:17.893 02:09:17 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:39:17.893 02:09:17 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:39:17.893 02:09:17 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:17.893 02:09:17 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:17.893 02:09:17 -- dd/posix.sh@130 -- # tests 00:39:17.893 02:09:17 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:39:17.893 * First test run, using AIO 00:39:17.893 02:09:17 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:39:17.893 02:09:17 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:17.893 02:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:17.893 02:09:17 -- common/autotest_common.sh@10 -- # set +x 00:39:17.893 ************************************ 00:39:17.893 START TEST dd_flag_append 00:39:17.893 ************************************ 00:39:17.893 02:09:17 -- common/autotest_common.sh@1111 -- # append 00:39:17.893 02:09:17 -- dd/posix.sh@16 -- # local dump0 00:39:17.893 02:09:17 -- dd/posix.sh@17 -- # local dump1 00:39:17.893 02:09:17 -- dd/posix.sh@19 -- # gen_bytes 32 00:39:17.893 02:09:17 -- dd/common.sh@98 -- # xtrace_disable 00:39:17.893 02:09:17 -- common/autotest_common.sh@10 -- # set +x 00:39:17.893 02:09:17 -- dd/posix.sh@19 -- # dump0=eskzlu66mvpsoq5xnqbqr6wqrft4lnur 00:39:17.893 02:09:17 -- dd/posix.sh@20 -- # gen_bytes 32 00:39:17.893 02:09:17 -- dd/common.sh@98 -- # xtrace_disable 00:39:17.893 02:09:17 -- common/autotest_common.sh@10 -- # set +x 00:39:17.893 02:09:17 -- dd/posix.sh@20 -- # dump1=p62czzc25djhq6gijvngxiwv1n8uycyu 00:39:17.893 02:09:17 -- dd/posix.sh@22 -- # printf %s eskzlu66mvpsoq5xnqbqr6wqrft4lnur 00:39:17.893 02:09:17 -- dd/posix.sh@23 -- # printf %s p62czzc25djhq6gijvngxiwv1n8uycyu 00:39:17.893 02:09:17 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:39:17.893 [2024-04-24 02:09:17.888546] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:17.893 [2024-04-24 02:09:17.888731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143704 ] 00:39:18.151 [2024-04-24 02:09:18.069812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.409 [2024-04-24 02:09:18.288394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.118  Copying: 32/32 [B] (average 31 kBps) 00:39:20.118 00:39:20.118 02:09:20 -- dd/posix.sh@27 -- # [[ p62czzc25djhq6gijvngxiwv1n8uycyueskzlu66mvpsoq5xnqbqr6wqrft4lnur == \p\6\2\c\z\z\c\2\5\d\j\h\q\6\g\i\j\v\n\g\x\i\w\v\1\n\8\u\y\c\y\u\e\s\k\z\l\u\6\6\m\v\p\s\o\q\5\x\n\q\b\q\r\6\w\q\r\f\t\4\l\n\u\r ]] 00:39:20.118 00:39:20.118 real 0m2.358s 00:39:20.118 user 0m1.985s 00:39:20.118 sys 0m0.240s 00:39:20.118 02:09:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:20.118 02:09:20 -- common/autotest_common.sh@10 -- # set +x 00:39:20.118 ************************************ 00:39:20.118 END TEST dd_flag_append 00:39:20.118 ************************************ 00:39:20.376 02:09:20 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:39:20.376 02:09:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:20.376 02:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:20.376 02:09:20 -- common/autotest_common.sh@10 -- # set +x 00:39:20.376 ************************************ 00:39:20.376 START TEST dd_flag_directory 00:39:20.376 ************************************ 00:39:20.376 02:09:20 -- common/autotest_common.sh@1111 -- # directory 00:39:20.376 02:09:20 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:20.376 02:09:20 -- common/autotest_common.sh@638 -- # local es=0 00:39:20.377 
02:09:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:20.377 02:09:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:20.377 02:09:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:20.377 02:09:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:20.377 02:09:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:20.377 02:09:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:20.377 02:09:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:20.377 02:09:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:20.377 02:09:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:20.377 02:09:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:20.377 [2024-04-24 02:09:20.356623] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:20.377 [2024-04-24 02:09:20.356812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143755 ] 00:39:20.635 [2024-04-24 02:09:20.537306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.894 [2024-04-24 02:09:20.767877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.152 [2024-04-24 02:09:21.142861] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:39:21.152 [2024-04-24 02:09:21.142958] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:39:21.152 [2024-04-24 02:09:21.142984] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:22.085 [2024-04-24 02:09:22.163194] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:22.652 02:09:22 -- common/autotest_common.sh@641 -- # es=236 00:39:22.652 02:09:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:39:22.652 02:09:22 -- common/autotest_common.sh@650 -- # es=108 00:39:22.652 02:09:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:39:22.652 02:09:22 -- common/autotest_common.sh@658 -- # es=1 00:39:22.652 02:09:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:39:22.652 02:09:22 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:39:22.652 02:09:22 -- common/autotest_common.sh@638 -- # local es=0 00:39:22.652 02:09:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:39:22.652 02:09:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:22.652 02:09:22 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:39:22.652 02:09:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:22.652 02:09:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:22.652 02:09:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:22.652 02:09:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:22.652 02:09:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:22.652 02:09:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:22.652 02:09:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:39:22.937 [2024-04-24 02:09:22.767442] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:22.937 [2024-04-24 02:09:22.767651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143795 ] 00:39:22.937 [2024-04-24 02:09:22.958095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:23.216 [2024-04-24 02:09:23.236279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.783 [2024-04-24 02:09:23.615367] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:39:23.783 [2024-04-24 02:09:23.615444] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:39:23.783 [2024-04-24 02:09:23.615474] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:24.717 [2024-04-24 02:09:24.542541] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:24.974 02:09:25 -- common/autotest_common.sh@641 -- # es=236 00:39:24.974 02:09:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:39:24.974 02:09:25 -- common/autotest_common.sh@650 -- # es=108 00:39:24.974 02:09:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:39:24.974 02:09:25 -- common/autotest_common.sh@658 -- # es=1 00:39:24.974 02:09:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:39:24.974 00:39:24.974 real 0m4.755s 00:39:24.974 user 0m4.066s 00:39:24.974 sys 0m0.489s 00:39:24.975 02:09:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:24.975 02:09:25 -- common/autotest_common.sh@10 -- # set +x 00:39:24.975 ************************************ 00:39:24.975 END TEST dd_flag_directory 00:39:24.975 ************************************ 00:39:25.233 02:09:25 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:39:25.233 02:09:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:25.233 02:09:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:25.233 02:09:25 -- common/autotest_common.sh@10 -- # set +x 00:39:25.233 ************************************ 00:39:25.233 START TEST dd_flag_nofollow 00:39:25.233 ************************************ 00:39:25.233 02:09:25 -- common/autotest_common.sh@1111 -- # nofollow 00:39:25.233 02:09:25 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:39:25.233 02:09:25 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:39:25.233 02:09:25 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:39:25.233 02:09:25 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:39:25.233 02:09:25 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:25.233 02:09:25 -- common/autotest_common.sh@638 -- # local es=0 00:39:25.233 02:09:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:25.233 02:09:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.233 02:09:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:25.233 02:09:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.233 02:09:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:25.233 02:09:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.233 02:09:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:25.233 02:09:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:25.233 02:09:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:25.233 02:09:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:25.233 [2024-04-24 02:09:25.231229] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
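The NOT wrapper in the trace above is the harness's assertion that this spdk_dd call must fail: with --iflag=nofollow the open of dd.dump0.link is refused (ELOOP, reported as "Too many levels of symbolic links"), and the valid_exec_arg / es= bookkeeping that follows turns the non-zero exit into a passing check. A rough sketch of that error-path pattern, with expect_failure as a hypothetical stand-in for the harness's NOT helper:
expect_failure() {               # succeed only if the wrapped command fails
  ! "$@"
}
ln -fs dd.dump0 dd.dump0.link
# must fail: nofollow refuses to open a path whose final component is a symlink
expect_failure spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1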
00:39:25.233 [2024-04-24 02:09:25.231532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143849 ] 00:39:25.492 [2024-04-24 02:09:25.417338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.750 [2024-04-24 02:09:25.711732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.317 [2024-04-24 02:09:26.093684] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:39:26.317 [2024-04-24 02:09:26.093769] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:39:26.317 [2024-04-24 02:09:26.093796] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:27.253 [2024-04-24 02:09:27.110774] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:27.820 02:09:27 -- common/autotest_common.sh@641 -- # es=216 00:39:27.820 02:09:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:39:27.820 02:09:27 -- common/autotest_common.sh@650 -- # es=88 00:39:27.820 02:09:27 -- common/autotest_common.sh@651 -- # case "$es" in 00:39:27.820 02:09:27 -- common/autotest_common.sh@658 -- # es=1 00:39:27.820 02:09:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:39:27.820 02:09:27 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:39:27.820 02:09:27 -- common/autotest_common.sh@638 -- # local es=0 00:39:27.820 02:09:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:39:27.820 02:09:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.820 02:09:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:27.820 02:09:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.820 02:09:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:27.820 02:09:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.820 02:09:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:39:27.820 02:09:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.820 02:09:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:27.820 02:09:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:39:27.820 [2024-04-24 02:09:27.696271] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:27.820 [2024-04-24 02:09:27.696396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143882 ] 00:39:27.820 [2024-04-24 02:09:27.855947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.079 [2024-04-24 02:09:28.078431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.649 [2024-04-24 02:09:28.469635] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:39:28.649 [2024-04-24 02:09:28.469729] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:39:28.649 [2024-04-24 02:09:28.469757] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:29.590 [2024-04-24 02:09:29.399842] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:29.849 02:09:29 -- common/autotest_common.sh@641 -- # es=216 00:39:29.849 02:09:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:39:29.849 02:09:29 -- common/autotest_common.sh@650 -- # es=88 00:39:29.849 02:09:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:39:29.849 02:09:29 -- common/autotest_common.sh@658 -- # es=1 00:39:29.849 02:09:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:39:29.849 02:09:29 -- dd/posix.sh@46 -- # gen_bytes 512 00:39:29.849 02:09:29 -- dd/common.sh@98 -- # xtrace_disable 00:39:29.849 02:09:29 -- common/autotest_common.sh@10 -- # set +x 00:39:29.849 02:09:29 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:30.108 [2024-04-24 02:09:29.948283] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:30.108 [2024-04-24 02:09:29.948409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143919 ] 00:39:30.108 [2024-04-24 02:09:30.109215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.367 [2024-04-24 02:09:30.345092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.365  Copying: 512/512 [B] (average 500 kBps) 00:39:32.365 00:39:32.365 02:09:32 -- dd/posix.sh@49 -- # [[ 2trs3v824w893qvyl35r92847o154i7a73zc60pokl1iho3dricklq92cwi5zp1zlerfol3tbx9ij3miv42vs4li7o5aq69tkjgrato84jm46u0rcgzxpd1ws9dfbexdhxbulv9w61c487gdebdwv1x7j4z63minldgmxl3vvlplg1tjc15zz7eadcgmourv6aq2ctq92l3upn35aodurw17lnj0h74bwn4m8qddo0nt9tqogea2hk5ap92i7wdfe63kj8w4dt23jazcrk315shoynkzkmp95fmlwe0tom57tvia3w1ajyqduhzbl9zf9cy9nxks09ezj94h0zikh305pgy5kx64yag2ynpzaauunyp54b056zpuags1vnx02ozsnfli8k0li462xb42pidubuij0uxx9dlmiacpa8h512h2ta29ie2023qewmvz1u9lhmbp2cytmdh1h99m63hjuoys8w76iyzde2uux7s1sd30naf5qtpb7qzbijey == \2\t\r\s\3\v\8\2\4\w\8\9\3\q\v\y\l\3\5\r\9\2\8\4\7\o\1\5\4\i\7\a\7\3\z\c\6\0\p\o\k\l\1\i\h\o\3\d\r\i\c\k\l\q\9\2\c\w\i\5\z\p\1\z\l\e\r\f\o\l\3\t\b\x\9\i\j\3\m\i\v\4\2\v\s\4\l\i\7\o\5\a\q\6\9\t\k\j\g\r\a\t\o\8\4\j\m\4\6\u\0\r\c\g\z\x\p\d\1\w\s\9\d\f\b\e\x\d\h\x\b\u\l\v\9\w\6\1\c\4\8\7\g\d\e\b\d\w\v\1\x\7\j\4\z\6\3\m\i\n\l\d\g\m\x\l\3\v\v\l\p\l\g\1\t\j\c\1\5\z\z\7\e\a\d\c\g\m\o\u\r\v\6\a\q\2\c\t\q\9\2\l\3\u\p\n\3\5\a\o\d\u\r\w\1\7\l\n\j\0\h\7\4\b\w\n\4\m\8\q\d\d\o\0\n\t\9\t\q\o\g\e\a\2\h\k\5\a\p\9\2\i\7\w\d\f\e\6\3\k\j\8\w\4\d\t\2\3\j\a\z\c\r\k\3\1\5\s\h\o\y\n\k\z\k\m\p\9\5\f\m\l\w\e\0\t\o\m\5\7\t\v\i\a\3\w\1\a\j\y\q\d\u\h\z\b\l\9\z\f\9\c\y\9\n\x\k\s\0\9\e\z\j\9\4\h\0\z\i\k\h\3\0\5\p\g\y\5\k\x\6\4\y\a\g\2\y\n\p\z\a\a\u\u\n\y\p\5\4\b\0\5\6\z\p\u\a\g\s\1\v\n\x\0\2\o\z\s\n\f\l\i\8\k\0\l\i\4\6\2\x\b\4\2\p\i\d\u\b\u\i\j\0\u\x\x\9\d\l\m\i\a\c\p\a\8\h\5\1\2\h\2\t\a\2\9\i\e\2\0\2\3\q\e\w\m\v\z\1\u\9\l\h\m\b\p\2\c\y\t\m\d\h\1\h\9\9\m\6\3\h\j\u\o\y\s\8\w\7\6\i\y\z\d\e\2\u\u\x\7\s\1\s\d\3\0\n\a\f\5\q\t\p\b\7\q\z\b\i\j\e\y ]] 00:39:32.365 00:39:32.365 real 0m7.041s 00:39:32.365 user 0m6.077s 00:39:32.365 sys 0m0.634s 00:39:32.365 02:09:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:32.365 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:39:32.365 ************************************ 00:39:32.365 END TEST dd_flag_nofollow 00:39:32.365 ************************************ 00:39:32.365 02:09:32 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:39:32.365 02:09:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:32.365 02:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:32.365 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:39:32.365 ************************************ 00:39:32.365 START TEST dd_flag_noatime 00:39:32.365 ************************************ 00:39:32.365 02:09:32 -- common/autotest_common.sh@1111 -- # noatime 00:39:32.366 02:09:32 -- dd/posix.sh@53 -- # local atime_if 00:39:32.366 02:09:32 -- dd/posix.sh@54 -- # local atime_of 00:39:32.366 02:09:32 -- dd/posix.sh@58 -- # gen_bytes 512 00:39:32.366 02:09:32 -- dd/common.sh@98 -- # xtrace_disable 00:39:32.366 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:39:32.366 02:09:32 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:32.366 02:09:32 -- dd/posix.sh@60 -- # atime_if=1713924570 00:39:32.366 02:09:32 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:32.366 02:09:32 -- dd/posix.sh@61 -- # atime_of=1713924572 00:39:32.366 02:09:32 -- dd/posix.sh@66 -- # sleep 1 00:39:33.298 02:09:33 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:33.556 [2024-04-24 02:09:33.382500] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:33.556 [2024-04-24 02:09:33.382791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143991 ] 00:39:33.556 [2024-04-24 02:09:33.569949] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.814 [2024-04-24 02:09:33.839620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.754  Copying: 512/512 [B] (average 500 kBps) 00:39:35.754 00:39:35.754 02:09:35 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:35.754 02:09:35 -- dd/posix.sh@69 -- # (( atime_if == 1713924570 )) 00:39:35.754 02:09:35 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:35.754 02:09:35 -- dd/posix.sh@70 -- # (( atime_of == 1713924572 )) 00:39:35.754 02:09:35 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:35.754 [2024-04-24 02:09:35.744917] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:35.754 [2024-04-24 02:09:35.745091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144018 ] 00:39:36.012 [2024-04-24 02:09:35.925475] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.270 [2024-04-24 02:09:36.172834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.430  Copying: 512/512 [B] (average 500 kBps) 00:39:38.430 00:39:38.430 02:09:38 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:38.430 02:09:38 -- dd/posix.sh@73 -- # (( atime_if < 1713924576 )) 00:39:38.430 00:39:38.430 real 0m5.808s 00:39:38.430 user 0m4.020s 00:39:38.430 sys 0m0.527s 00:39:38.430 02:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:38.430 02:09:38 -- common/autotest_common.sh@10 -- # set +x 00:39:38.430 ************************************ 00:39:38.430 END TEST dd_flag_noatime 00:39:38.430 ************************************ 00:39:38.430 02:09:38 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:39:38.430 02:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:38.430 02:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:38.430 02:09:38 -- common/autotest_common.sh@10 -- # set +x 00:39:38.430 ************************************ 00:39:38.430 START TEST dd_flags_misc 00:39:38.430 ************************************ 00:39:38.430 02:09:38 -- common/autotest_common.sh@1111 -- # io 00:39:38.430 02:09:38 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:39:38.430 02:09:38 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:39:38.430 
02:09:38 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:39:38.430 02:09:38 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:39:38.430 02:09:38 -- dd/posix.sh@86 -- # gen_bytes 512 00:39:38.430 02:09:38 -- dd/common.sh@98 -- # xtrace_disable 00:39:38.430 02:09:38 -- common/autotest_common.sh@10 -- # set +x 00:39:38.430 02:09:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:38.430 02:09:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:39:38.431 [2024-04-24 02:09:38.277406] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:38.431 [2024-04-24 02:09:38.277604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144079 ] 00:39:38.431 [2024-04-24 02:09:38.461110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.688 [2024-04-24 02:09:38.769285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.155  Copying: 512/512 [B] (average 500 kBps) 00:39:41.155 00:39:41.155 02:09:40 -- dd/posix.sh@93 -- # [[ parlwocjfdel66jskq2agcgb3pxdtsy75dh88vb3q0mcd5ese6xjizaxs1meh5hhe7p3rfduon3wwr27ez7xwh0owsruo04igkoaic41iwx4ifeni9en8rrtzsc633njqf1858koo0baa1v1880d31v8a0xjpfzbxzho8b2reijpjzrr8vkd4lagl86pmlkty5upgwydxve6t6wpszzhfcs4h83in5mwhczy51rhq82tac713swzpflw6dx8t5ff14m5jy8m32se77lbjvyjzvc24kcg14fgqdfh1vgc8u636xfzzz3jxr4ag116qzflswrv624n6s0iuj8dbg3x2eidc7jnh45k3sud0439zismfrvgw3elfwndecv6l2x3f6ielvq9oqu68204drzj6ahluck6brxtao686vfv915s4utdz5xkf4whrdpm5nz58sso8faayud88nfvyynz9idozqalyntfq2qjdtuw079xjc8amnh5vgv87t34ti9v == \p\a\r\l\w\o\c\j\f\d\e\l\6\6\j\s\k\q\2\a\g\c\g\b\3\p\x\d\t\s\y\7\5\d\h\8\8\v\b\3\q\0\m\c\d\5\e\s\e\6\x\j\i\z\a\x\s\1\m\e\h\5\h\h\e\7\p\3\r\f\d\u\o\n\3\w\w\r\2\7\e\z\7\x\w\h\0\o\w\s\r\u\o\0\4\i\g\k\o\a\i\c\4\1\i\w\x\4\i\f\e\n\i\9\e\n\8\r\r\t\z\s\c\6\3\3\n\j\q\f\1\8\5\8\k\o\o\0\b\a\a\1\v\1\8\8\0\d\3\1\v\8\a\0\x\j\p\f\z\b\x\z\h\o\8\b\2\r\e\i\j\p\j\z\r\r\8\v\k\d\4\l\a\g\l\8\6\p\m\l\k\t\y\5\u\p\g\w\y\d\x\v\e\6\t\6\w\p\s\z\z\h\f\c\s\4\h\8\3\i\n\5\m\w\h\c\z\y\5\1\r\h\q\8\2\t\a\c\7\1\3\s\w\z\p\f\l\w\6\d\x\8\t\5\f\f\1\4\m\5\j\y\8\m\3\2\s\e\7\7\l\b\j\v\y\j\z\v\c\2\4\k\c\g\1\4\f\g\q\d\f\h\1\v\g\c\8\u\6\3\6\x\f\z\z\z\3\j\x\r\4\a\g\1\1\6\q\z\f\l\s\w\r\v\6\2\4\n\6\s\0\i\u\j\8\d\b\g\3\x\2\e\i\d\c\7\j\n\h\4\5\k\3\s\u\d\0\4\3\9\z\i\s\m\f\r\v\g\w\3\e\l\f\w\n\d\e\c\v\6\l\2\x\3\f\6\i\e\l\v\q\9\o\q\u\6\8\2\0\4\d\r\z\j\6\a\h\l\u\c\k\6\b\r\x\t\a\o\6\8\6\v\f\v\9\1\5\s\4\u\t\d\z\5\x\k\f\4\w\h\r\d\p\m\5\n\z\5\8\s\s\o\8\f\a\a\y\u\d\8\8\n\f\v\y\y\n\z\9\i\d\o\z\q\a\l\y\n\t\f\q\2\q\j\d\t\u\w\0\7\9\x\j\c\8\a\m\n\h\5\v\g\v\8\7\t\3\4\t\i\9\v ]] 00:39:41.155 02:09:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:41.155 02:09:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:39:41.155 [2024-04-24 02:09:40.820048] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
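dd_flags_misc, now underway, walks a small flag matrix: each read flag in flags_ro=(direct nonblock) is paired with every write flag in flags_rw=(direct nonblock sync dsync), a fresh 512-byte payload is generated per read flag, and each copy is verified, which is what the repeated 512/512 copies with nonblock, sync and dsync on the following lines are. The loop, reduced to a sketch in which head -c /dev/urandom and cmp stand in for the harness's gen_bytes and string comparison:
for flag_ro in direct nonblock; do
  head -c 512 /dev/urandom > dd.dump0        # stand-in for gen_bytes 512
  for flag_rw in direct nonblock sync dsync; do
    spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    cmp -s dd.dump0 dd.dump1 || echo "mismatch for $flag_ro/$flag_rw"
  done
done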
00:39:41.155 [2024-04-24 02:09:40.820291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144112 ] 00:39:41.155 [2024-04-24 02:09:40.995768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.155 [2024-04-24 02:09:41.223539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.097  Copying: 512/512 [B] (average 500 kBps) 00:39:43.097 00:39:43.097 02:09:43 -- dd/posix.sh@93 -- # [[ parlwocjfdel66jskq2agcgb3pxdtsy75dh88vb3q0mcd5ese6xjizaxs1meh5hhe7p3rfduon3wwr27ez7xwh0owsruo04igkoaic41iwx4ifeni9en8rrtzsc633njqf1858koo0baa1v1880d31v8a0xjpfzbxzho8b2reijpjzrr8vkd4lagl86pmlkty5upgwydxve6t6wpszzhfcs4h83in5mwhczy51rhq82tac713swzpflw6dx8t5ff14m5jy8m32se77lbjvyjzvc24kcg14fgqdfh1vgc8u636xfzzz3jxr4ag116qzflswrv624n6s0iuj8dbg3x2eidc7jnh45k3sud0439zismfrvgw3elfwndecv6l2x3f6ielvq9oqu68204drzj6ahluck6brxtao686vfv915s4utdz5xkf4whrdpm5nz58sso8faayud88nfvyynz9idozqalyntfq2qjdtuw079xjc8amnh5vgv87t34ti9v == \p\a\r\l\w\o\c\j\f\d\e\l\6\6\j\s\k\q\2\a\g\c\g\b\3\p\x\d\t\s\y\7\5\d\h\8\8\v\b\3\q\0\m\c\d\5\e\s\e\6\x\j\i\z\a\x\s\1\m\e\h\5\h\h\e\7\p\3\r\f\d\u\o\n\3\w\w\r\2\7\e\z\7\x\w\h\0\o\w\s\r\u\o\0\4\i\g\k\o\a\i\c\4\1\i\w\x\4\i\f\e\n\i\9\e\n\8\r\r\t\z\s\c\6\3\3\n\j\q\f\1\8\5\8\k\o\o\0\b\a\a\1\v\1\8\8\0\d\3\1\v\8\a\0\x\j\p\f\z\b\x\z\h\o\8\b\2\r\e\i\j\p\j\z\r\r\8\v\k\d\4\l\a\g\l\8\6\p\m\l\k\t\y\5\u\p\g\w\y\d\x\v\e\6\t\6\w\p\s\z\z\h\f\c\s\4\h\8\3\i\n\5\m\w\h\c\z\y\5\1\r\h\q\8\2\t\a\c\7\1\3\s\w\z\p\f\l\w\6\d\x\8\t\5\f\f\1\4\m\5\j\y\8\m\3\2\s\e\7\7\l\b\j\v\y\j\z\v\c\2\4\k\c\g\1\4\f\g\q\d\f\h\1\v\g\c\8\u\6\3\6\x\f\z\z\z\3\j\x\r\4\a\g\1\1\6\q\z\f\l\s\w\r\v\6\2\4\n\6\s\0\i\u\j\8\d\b\g\3\x\2\e\i\d\c\7\j\n\h\4\5\k\3\s\u\d\0\4\3\9\z\i\s\m\f\r\v\g\w\3\e\l\f\w\n\d\e\c\v\6\l\2\x\3\f\6\i\e\l\v\q\9\o\q\u\6\8\2\0\4\d\r\z\j\6\a\h\l\u\c\k\6\b\r\x\t\a\o\6\8\6\v\f\v\9\1\5\s\4\u\t\d\z\5\x\k\f\4\w\h\r\d\p\m\5\n\z\5\8\s\s\o\8\f\a\a\y\u\d\8\8\n\f\v\y\y\n\z\9\i\d\o\z\q\a\l\y\n\t\f\q\2\q\j\d\t\u\w\0\7\9\x\j\c\8\a\m\n\h\5\v\g\v\8\7\t\3\4\t\i\9\v ]] 00:39:43.097 02:09:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:43.097 02:09:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:39:43.097 [2024-04-24 02:09:43.110618] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:43.097 [2024-04-24 02:09:43.110856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144141 ] 00:39:43.356 [2024-04-24 02:09:43.285247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.614 [2024-04-24 02:09:43.513644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.775  Copying: 512/512 [B] (average 250 kBps) 00:39:45.775 00:39:45.775 02:09:45 -- dd/posix.sh@93 -- # [[ parlwocjfdel66jskq2agcgb3pxdtsy75dh88vb3q0mcd5ese6xjizaxs1meh5hhe7p3rfduon3wwr27ez7xwh0owsruo04igkoaic41iwx4ifeni9en8rrtzsc633njqf1858koo0baa1v1880d31v8a0xjpfzbxzho8b2reijpjzrr8vkd4lagl86pmlkty5upgwydxve6t6wpszzhfcs4h83in5mwhczy51rhq82tac713swzpflw6dx8t5ff14m5jy8m32se77lbjvyjzvc24kcg14fgqdfh1vgc8u636xfzzz3jxr4ag116qzflswrv624n6s0iuj8dbg3x2eidc7jnh45k3sud0439zismfrvgw3elfwndecv6l2x3f6ielvq9oqu68204drzj6ahluck6brxtao686vfv915s4utdz5xkf4whrdpm5nz58sso8faayud88nfvyynz9idozqalyntfq2qjdtuw079xjc8amnh5vgv87t34ti9v == \p\a\r\l\w\o\c\j\f\d\e\l\6\6\j\s\k\q\2\a\g\c\g\b\3\p\x\d\t\s\y\7\5\d\h\8\8\v\b\3\q\0\m\c\d\5\e\s\e\6\x\j\i\z\a\x\s\1\m\e\h\5\h\h\e\7\p\3\r\f\d\u\o\n\3\w\w\r\2\7\e\z\7\x\w\h\0\o\w\s\r\u\o\0\4\i\g\k\o\a\i\c\4\1\i\w\x\4\i\f\e\n\i\9\e\n\8\r\r\t\z\s\c\6\3\3\n\j\q\f\1\8\5\8\k\o\o\0\b\a\a\1\v\1\8\8\0\d\3\1\v\8\a\0\x\j\p\f\z\b\x\z\h\o\8\b\2\r\e\i\j\p\j\z\r\r\8\v\k\d\4\l\a\g\l\8\6\p\m\l\k\t\y\5\u\p\g\w\y\d\x\v\e\6\t\6\w\p\s\z\z\h\f\c\s\4\h\8\3\i\n\5\m\w\h\c\z\y\5\1\r\h\q\8\2\t\a\c\7\1\3\s\w\z\p\f\l\w\6\d\x\8\t\5\f\f\1\4\m\5\j\y\8\m\3\2\s\e\7\7\l\b\j\v\y\j\z\v\c\2\4\k\c\g\1\4\f\g\q\d\f\h\1\v\g\c\8\u\6\3\6\x\f\z\z\z\3\j\x\r\4\a\g\1\1\6\q\z\f\l\s\w\r\v\6\2\4\n\6\s\0\i\u\j\8\d\b\g\3\x\2\e\i\d\c\7\j\n\h\4\5\k\3\s\u\d\0\4\3\9\z\i\s\m\f\r\v\g\w\3\e\l\f\w\n\d\e\c\v\6\l\2\x\3\f\6\i\e\l\v\q\9\o\q\u\6\8\2\0\4\d\r\z\j\6\a\h\l\u\c\k\6\b\r\x\t\a\o\6\8\6\v\f\v\9\1\5\s\4\u\t\d\z\5\x\k\f\4\w\h\r\d\p\m\5\n\z\5\8\s\s\o\8\f\a\a\y\u\d\8\8\n\f\v\y\y\n\z\9\i\d\o\z\q\a\l\y\n\t\f\q\2\q\j\d\t\u\w\0\7\9\x\j\c\8\a\m\n\h\5\v\g\v\8\7\t\3\4\t\i\9\v ]] 00:39:45.775 02:09:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:45.775 02:09:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:39:45.775 [2024-04-24 02:09:45.519209] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:45.775 [2024-04-24 02:09:45.519416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144177 ] 00:39:45.775 [2024-04-24 02:09:45.696091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.033 [2024-04-24 02:09:45.938782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.272  Copying: 512/512 [B] (average 166 kBps) 00:39:48.272 00:39:48.272 02:09:47 -- dd/posix.sh@93 -- # [[ parlwocjfdel66jskq2agcgb3pxdtsy75dh88vb3q0mcd5ese6xjizaxs1meh5hhe7p3rfduon3wwr27ez7xwh0owsruo04igkoaic41iwx4ifeni9en8rrtzsc633njqf1858koo0baa1v1880d31v8a0xjpfzbxzho8b2reijpjzrr8vkd4lagl86pmlkty5upgwydxve6t6wpszzhfcs4h83in5mwhczy51rhq82tac713swzpflw6dx8t5ff14m5jy8m32se77lbjvyjzvc24kcg14fgqdfh1vgc8u636xfzzz3jxr4ag116qzflswrv624n6s0iuj8dbg3x2eidc7jnh45k3sud0439zismfrvgw3elfwndecv6l2x3f6ielvq9oqu68204drzj6ahluck6brxtao686vfv915s4utdz5xkf4whrdpm5nz58sso8faayud88nfvyynz9idozqalyntfq2qjdtuw079xjc8amnh5vgv87t34ti9v == \p\a\r\l\w\o\c\j\f\d\e\l\6\6\j\s\k\q\2\a\g\c\g\b\3\p\x\d\t\s\y\7\5\d\h\8\8\v\b\3\q\0\m\c\d\5\e\s\e\6\x\j\i\z\a\x\s\1\m\e\h\5\h\h\e\7\p\3\r\f\d\u\o\n\3\w\w\r\2\7\e\z\7\x\w\h\0\o\w\s\r\u\o\0\4\i\g\k\o\a\i\c\4\1\i\w\x\4\i\f\e\n\i\9\e\n\8\r\r\t\z\s\c\6\3\3\n\j\q\f\1\8\5\8\k\o\o\0\b\a\a\1\v\1\8\8\0\d\3\1\v\8\a\0\x\j\p\f\z\b\x\z\h\o\8\b\2\r\e\i\j\p\j\z\r\r\8\v\k\d\4\l\a\g\l\8\6\p\m\l\k\t\y\5\u\p\g\w\y\d\x\v\e\6\t\6\w\p\s\z\z\h\f\c\s\4\h\8\3\i\n\5\m\w\h\c\z\y\5\1\r\h\q\8\2\t\a\c\7\1\3\s\w\z\p\f\l\w\6\d\x\8\t\5\f\f\1\4\m\5\j\y\8\m\3\2\s\e\7\7\l\b\j\v\y\j\z\v\c\2\4\k\c\g\1\4\f\g\q\d\f\h\1\v\g\c\8\u\6\3\6\x\f\z\z\z\3\j\x\r\4\a\g\1\1\6\q\z\f\l\s\w\r\v\6\2\4\n\6\s\0\i\u\j\8\d\b\g\3\x\2\e\i\d\c\7\j\n\h\4\5\k\3\s\u\d\0\4\3\9\z\i\s\m\f\r\v\g\w\3\e\l\f\w\n\d\e\c\v\6\l\2\x\3\f\6\i\e\l\v\q\9\o\q\u\6\8\2\0\4\d\r\z\j\6\a\h\l\u\c\k\6\b\r\x\t\a\o\6\8\6\v\f\v\9\1\5\s\4\u\t\d\z\5\x\k\f\4\w\h\r\d\p\m\5\n\z\5\8\s\s\o\8\f\a\a\y\u\d\8\8\n\f\v\y\y\n\z\9\i\d\o\z\q\a\l\y\n\t\f\q\2\q\j\d\t\u\w\0\7\9\x\j\c\8\a\m\n\h\5\v\g\v\8\7\t\3\4\t\i\9\v ]] 00:39:48.272 02:09:47 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:39:48.272 02:09:47 -- dd/posix.sh@86 -- # gen_bytes 512 00:39:48.272 02:09:47 -- dd/common.sh@98 -- # xtrace_disable 00:39:48.272 02:09:47 -- common/autotest_common.sh@10 -- # set +x 00:39:48.272 02:09:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:48.272 02:09:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:39:48.272 [2024-04-24 02:09:47.943537] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:48.272 [2024-04-24 02:09:47.944338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144205 ] 00:39:48.272 [2024-04-24 02:09:48.121743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.272 [2024-04-24 02:09:48.350317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.229  Copying: 512/512 [B] (average 500 kBps) 00:39:50.229 00:39:50.230 02:09:50 -- dd/posix.sh@93 -- # [[ todkrzdu3ecoi6p70rt4zupcom1kb6x32iosjk2j5qm9bdf4rah3mtbv6nh2wm741qi38t2b2lxzh3bc1bpu45mmps1kel0wrweypk5n87ywf99if2lgfvvl5tb974qlrhy14jhq996d58ka16cq0m5ib75wv8a4zz46i09vs09pxnojf95tr69kbzv8goj028f88y06whv38uxcqvlxtaa1jkd76au2h1i9suubn42cu3n5zcuayakz02og8ru9l9zoulodn2cle58jq42g3zy1ywrebjkffyllfto7tjhdpohakvqsmx18wqb3antoltjcd5yf4oo1vmw7qhuoqfolshmw488ljlk547qla146dg2vvqaw33vdv2r0z3zc5558ewo8i4k1vvjykwmcvuzzjfyaf1v89nnb67jvt3zcbjrbqanucz9tux2421fnxwalwddwlpzso4kcgm7kb4sauxmq5r904lix99e2iz4egri2ca3km6wuhlkuaga8 == \t\o\d\k\r\z\d\u\3\e\c\o\i\6\p\7\0\r\t\4\z\u\p\c\o\m\1\k\b\6\x\3\2\i\o\s\j\k\2\j\5\q\m\9\b\d\f\4\r\a\h\3\m\t\b\v\6\n\h\2\w\m\7\4\1\q\i\3\8\t\2\b\2\l\x\z\h\3\b\c\1\b\p\u\4\5\m\m\p\s\1\k\e\l\0\w\r\w\e\y\p\k\5\n\8\7\y\w\f\9\9\i\f\2\l\g\f\v\v\l\5\t\b\9\7\4\q\l\r\h\y\1\4\j\h\q\9\9\6\d\5\8\k\a\1\6\c\q\0\m\5\i\b\7\5\w\v\8\a\4\z\z\4\6\i\0\9\v\s\0\9\p\x\n\o\j\f\9\5\t\r\6\9\k\b\z\v\8\g\o\j\0\2\8\f\8\8\y\0\6\w\h\v\3\8\u\x\c\q\v\l\x\t\a\a\1\j\k\d\7\6\a\u\2\h\1\i\9\s\u\u\b\n\4\2\c\u\3\n\5\z\c\u\a\y\a\k\z\0\2\o\g\8\r\u\9\l\9\z\o\u\l\o\d\n\2\c\l\e\5\8\j\q\4\2\g\3\z\y\1\y\w\r\e\b\j\k\f\f\y\l\l\f\t\o\7\t\j\h\d\p\o\h\a\k\v\q\s\m\x\1\8\w\q\b\3\a\n\t\o\l\t\j\c\d\5\y\f\4\o\o\1\v\m\w\7\q\h\u\o\q\f\o\l\s\h\m\w\4\8\8\l\j\l\k\5\4\7\q\l\a\1\4\6\d\g\2\v\v\q\a\w\3\3\v\d\v\2\r\0\z\3\z\c\5\5\5\8\e\w\o\8\i\4\k\1\v\v\j\y\k\w\m\c\v\u\z\z\j\f\y\a\f\1\v\8\9\n\n\b\6\7\j\v\t\3\z\c\b\j\r\b\q\a\n\u\c\z\9\t\u\x\2\4\2\1\f\n\x\w\a\l\w\d\d\w\l\p\z\s\o\4\k\c\g\m\7\k\b\4\s\a\u\x\m\q\5\r\9\0\4\l\i\x\9\9\e\2\i\z\4\e\g\r\i\2\c\a\3\k\m\6\w\u\h\l\k\u\a\g\a\8 ]] 00:39:50.230 02:09:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:50.230 02:09:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:39:50.230 [2024-04-24 02:09:50.303245] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:50.230 [2024-04-24 02:09:50.303437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144238 ] 00:39:50.488 [2024-04-24 02:09:50.482920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:50.746 [2024-04-24 02:09:50.784523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.693  Copying: 512/512 [B] (average 500 kBps) 00:39:52.693 00:39:52.693 02:09:52 -- dd/posix.sh@93 -- # [[ todkrzdu3ecoi6p70rt4zupcom1kb6x32iosjk2j5qm9bdf4rah3mtbv6nh2wm741qi38t2b2lxzh3bc1bpu45mmps1kel0wrweypk5n87ywf99if2lgfvvl5tb974qlrhy14jhq996d58ka16cq0m5ib75wv8a4zz46i09vs09pxnojf95tr69kbzv8goj028f88y06whv38uxcqvlxtaa1jkd76au2h1i9suubn42cu3n5zcuayakz02og8ru9l9zoulodn2cle58jq42g3zy1ywrebjkffyllfto7tjhdpohakvqsmx18wqb3antoltjcd5yf4oo1vmw7qhuoqfolshmw488ljlk547qla146dg2vvqaw33vdv2r0z3zc5558ewo8i4k1vvjykwmcvuzzjfyaf1v89nnb67jvt3zcbjrbqanucz9tux2421fnxwalwddwlpzso4kcgm7kb4sauxmq5r904lix99e2iz4egri2ca3km6wuhlkuaga8 == \t\o\d\k\r\z\d\u\3\e\c\o\i\6\p\7\0\r\t\4\z\u\p\c\o\m\1\k\b\6\x\3\2\i\o\s\j\k\2\j\5\q\m\9\b\d\f\4\r\a\h\3\m\t\b\v\6\n\h\2\w\m\7\4\1\q\i\3\8\t\2\b\2\l\x\z\h\3\b\c\1\b\p\u\4\5\m\m\p\s\1\k\e\l\0\w\r\w\e\y\p\k\5\n\8\7\y\w\f\9\9\i\f\2\l\g\f\v\v\l\5\t\b\9\7\4\q\l\r\h\y\1\4\j\h\q\9\9\6\d\5\8\k\a\1\6\c\q\0\m\5\i\b\7\5\w\v\8\a\4\z\z\4\6\i\0\9\v\s\0\9\p\x\n\o\j\f\9\5\t\r\6\9\k\b\z\v\8\g\o\j\0\2\8\f\8\8\y\0\6\w\h\v\3\8\u\x\c\q\v\l\x\t\a\a\1\j\k\d\7\6\a\u\2\h\1\i\9\s\u\u\b\n\4\2\c\u\3\n\5\z\c\u\a\y\a\k\z\0\2\o\g\8\r\u\9\l\9\z\o\u\l\o\d\n\2\c\l\e\5\8\j\q\4\2\g\3\z\y\1\y\w\r\e\b\j\k\f\f\y\l\l\f\t\o\7\t\j\h\d\p\o\h\a\k\v\q\s\m\x\1\8\w\q\b\3\a\n\t\o\l\t\j\c\d\5\y\f\4\o\o\1\v\m\w\7\q\h\u\o\q\f\o\l\s\h\m\w\4\8\8\l\j\l\k\5\4\7\q\l\a\1\4\6\d\g\2\v\v\q\a\w\3\3\v\d\v\2\r\0\z\3\z\c\5\5\5\8\e\w\o\8\i\4\k\1\v\v\j\y\k\w\m\c\v\u\z\z\j\f\y\a\f\1\v\8\9\n\n\b\6\7\j\v\t\3\z\c\b\j\r\b\q\a\n\u\c\z\9\t\u\x\2\4\2\1\f\n\x\w\a\l\w\d\d\w\l\p\z\s\o\4\k\c\g\m\7\k\b\4\s\a\u\x\m\q\5\r\9\0\4\l\i\x\9\9\e\2\i\z\4\e\g\r\i\2\c\a\3\k\m\6\w\u\h\l\k\u\a\g\a\8 ]] 00:39:52.693 02:09:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:52.693 02:09:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:39:52.693 [2024-04-24 02:09:52.698828] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:52.693 [2024-04-24 02:09:52.699645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144268 ] 00:39:52.951 [2024-04-24 02:09:52.880800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.209 [2024-04-24 02:09:53.115558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.381  Copying: 512/512 [B] (average 125 kBps) 00:39:55.381 00:39:55.381 02:09:55 -- dd/posix.sh@93 -- # [[ todkrzdu3ecoi6p70rt4zupcom1kb6x32iosjk2j5qm9bdf4rah3mtbv6nh2wm741qi38t2b2lxzh3bc1bpu45mmps1kel0wrweypk5n87ywf99if2lgfvvl5tb974qlrhy14jhq996d58ka16cq0m5ib75wv8a4zz46i09vs09pxnojf95tr69kbzv8goj028f88y06whv38uxcqvlxtaa1jkd76au2h1i9suubn42cu3n5zcuayakz02og8ru9l9zoulodn2cle58jq42g3zy1ywrebjkffyllfto7tjhdpohakvqsmx18wqb3antoltjcd5yf4oo1vmw7qhuoqfolshmw488ljlk547qla146dg2vvqaw33vdv2r0z3zc5558ewo8i4k1vvjykwmcvuzzjfyaf1v89nnb67jvt3zcbjrbqanucz9tux2421fnxwalwddwlpzso4kcgm7kb4sauxmq5r904lix99e2iz4egri2ca3km6wuhlkuaga8 == \t\o\d\k\r\z\d\u\3\e\c\o\i\6\p\7\0\r\t\4\z\u\p\c\o\m\1\k\b\6\x\3\2\i\o\s\j\k\2\j\5\q\m\9\b\d\f\4\r\a\h\3\m\t\b\v\6\n\h\2\w\m\7\4\1\q\i\3\8\t\2\b\2\l\x\z\h\3\b\c\1\b\p\u\4\5\m\m\p\s\1\k\e\l\0\w\r\w\e\y\p\k\5\n\8\7\y\w\f\9\9\i\f\2\l\g\f\v\v\l\5\t\b\9\7\4\q\l\r\h\y\1\4\j\h\q\9\9\6\d\5\8\k\a\1\6\c\q\0\m\5\i\b\7\5\w\v\8\a\4\z\z\4\6\i\0\9\v\s\0\9\p\x\n\o\j\f\9\5\t\r\6\9\k\b\z\v\8\g\o\j\0\2\8\f\8\8\y\0\6\w\h\v\3\8\u\x\c\q\v\l\x\t\a\a\1\j\k\d\7\6\a\u\2\h\1\i\9\s\u\u\b\n\4\2\c\u\3\n\5\z\c\u\a\y\a\k\z\0\2\o\g\8\r\u\9\l\9\z\o\u\l\o\d\n\2\c\l\e\5\8\j\q\4\2\g\3\z\y\1\y\w\r\e\b\j\k\f\f\y\l\l\f\t\o\7\t\j\h\d\p\o\h\a\k\v\q\s\m\x\1\8\w\q\b\3\a\n\t\o\l\t\j\c\d\5\y\f\4\o\o\1\v\m\w\7\q\h\u\o\q\f\o\l\s\h\m\w\4\8\8\l\j\l\k\5\4\7\q\l\a\1\4\6\d\g\2\v\v\q\a\w\3\3\v\d\v\2\r\0\z\3\z\c\5\5\5\8\e\w\o\8\i\4\k\1\v\v\j\y\k\w\m\c\v\u\z\z\j\f\y\a\f\1\v\8\9\n\n\b\6\7\j\v\t\3\z\c\b\j\r\b\q\a\n\u\c\z\9\t\u\x\2\4\2\1\f\n\x\w\a\l\w\d\d\w\l\p\z\s\o\4\k\c\g\m\7\k\b\4\s\a\u\x\m\q\5\r\9\0\4\l\i\x\9\9\e\2\i\z\4\e\g\r\i\2\c\a\3\k\m\6\w\u\h\l\k\u\a\g\a\8 ]] 00:39:55.381 02:09:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:55.381 02:09:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:39:55.381 [2024-04-24 02:09:55.130181] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:39:55.381 [2024-04-24 02:09:55.130348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144297 ] 00:39:55.381 [2024-04-24 02:09:55.295286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.639 [2024-04-24 02:09:55.547393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.804  Copying: 512/512 [B] (average 500 kBps) 00:39:57.804 00:39:57.804 ************************************ 00:39:57.804 END TEST dd_flags_misc 00:39:57.804 ************************************ 00:39:57.804 02:09:57 -- dd/posix.sh@93 -- # [[ todkrzdu3ecoi6p70rt4zupcom1kb6x32iosjk2j5qm9bdf4rah3mtbv6nh2wm741qi38t2b2lxzh3bc1bpu45mmps1kel0wrweypk5n87ywf99if2lgfvvl5tb974qlrhy14jhq996d58ka16cq0m5ib75wv8a4zz46i09vs09pxnojf95tr69kbzv8goj028f88y06whv38uxcqvlxtaa1jkd76au2h1i9suubn42cu3n5zcuayakz02og8ru9l9zoulodn2cle58jq42g3zy1ywrebjkffyllfto7tjhdpohakvqsmx18wqb3antoltjcd5yf4oo1vmw7qhuoqfolshmw488ljlk547qla146dg2vvqaw33vdv2r0z3zc5558ewo8i4k1vvjykwmcvuzzjfyaf1v89nnb67jvt3zcbjrbqanucz9tux2421fnxwalwddwlpzso4kcgm7kb4sauxmq5r904lix99e2iz4egri2ca3km6wuhlkuaga8 == \t\o\d\k\r\z\d\u\3\e\c\o\i\6\p\7\0\r\t\4\z\u\p\c\o\m\1\k\b\6\x\3\2\i\o\s\j\k\2\j\5\q\m\9\b\d\f\4\r\a\h\3\m\t\b\v\6\n\h\2\w\m\7\4\1\q\i\3\8\t\2\b\2\l\x\z\h\3\b\c\1\b\p\u\4\5\m\m\p\s\1\k\e\l\0\w\r\w\e\y\p\k\5\n\8\7\y\w\f\9\9\i\f\2\l\g\f\v\v\l\5\t\b\9\7\4\q\l\r\h\y\1\4\j\h\q\9\9\6\d\5\8\k\a\1\6\c\q\0\m\5\i\b\7\5\w\v\8\a\4\z\z\4\6\i\0\9\v\s\0\9\p\x\n\o\j\f\9\5\t\r\6\9\k\b\z\v\8\g\o\j\0\2\8\f\8\8\y\0\6\w\h\v\3\8\u\x\c\q\v\l\x\t\a\a\1\j\k\d\7\6\a\u\2\h\1\i\9\s\u\u\b\n\4\2\c\u\3\n\5\z\c\u\a\y\a\k\z\0\2\o\g\8\r\u\9\l\9\z\o\u\l\o\d\n\2\c\l\e\5\8\j\q\4\2\g\3\z\y\1\y\w\r\e\b\j\k\f\f\y\l\l\f\t\o\7\t\j\h\d\p\o\h\a\k\v\q\s\m\x\1\8\w\q\b\3\a\n\t\o\l\t\j\c\d\5\y\f\4\o\o\1\v\m\w\7\q\h\u\o\q\f\o\l\s\h\m\w\4\8\8\l\j\l\k\5\4\7\q\l\a\1\4\6\d\g\2\v\v\q\a\w\3\3\v\d\v\2\r\0\z\3\z\c\5\5\5\8\e\w\o\8\i\4\k\1\v\v\j\y\k\w\m\c\v\u\z\z\j\f\y\a\f\1\v\8\9\n\n\b\6\7\j\v\t\3\z\c\b\j\r\b\q\a\n\u\c\z\9\t\u\x\2\4\2\1\f\n\x\w\a\l\w\d\d\w\l\p\z\s\o\4\k\c\g\m\7\k\b\4\s\a\u\x\m\q\5\r\9\0\4\l\i\x\9\9\e\2\i\z\4\e\g\r\i\2\c\a\3\k\m\6\w\u\h\l\k\u\a\g\a\8 ]] 00:39:57.804 00:39:57.804 real 0m19.315s 00:39:57.804 user 0m16.388s 00:39:57.804 sys 0m1.854s 00:39:57.804 02:09:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:57.804 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:39:57.804 02:09:57 -- dd/posix.sh@131 -- # tests_forced_aio 00:39:57.804 02:09:57 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:39:57.804 * Second test run, using AIO 00:39:57.804 02:09:57 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:39:57.804 02:09:57 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:39:57.804 02:09:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:57.804 02:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:57.804 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:39:57.804 ************************************ 00:39:57.804 START TEST dd_flag_append_forced_aio 00:39:57.804 ************************************ 00:39:57.804 02:09:57 -- common/autotest_common.sh@1111 -- # append 00:39:57.804 02:09:57 -- dd/posix.sh@16 -- # local dump0 00:39:57.804 02:09:57 -- dd/posix.sh@17 -- # local dump1 00:39:57.804 02:09:57 -- dd/posix.sh@19 -- # gen_bytes 32 00:39:57.804 02:09:57 -- dd/common.sh@98 -- # xtrace_disable 
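The append test that starts just above generates two 32-byte strings (dump0 and dump1 in the trace), writes one into each dump file, copies dd.dump0 onto dd.dump1 with --oflag=append, and asserts that dd.dump1 now contains its own string followed by dump0's. A rough equivalent, reconstructed from the trace (the tr/head pipeline is a stand-in for the script's gen_bytes 32 helper):

    # Reconstruction of the append check seen in the trace; not the actual test code.
    set -e
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd
    dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)     # stand-in for gen_bytes 32
    dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf %s "$dump0" > "$D/dd.dump0"
    printf %s "$dump1" > "$D/dd.dump1"
    "$SPDK_DD" --aio --if="$D/dd.dump0" --of="$D/dd.dump1" --oflag=append
    [[ $(< "$D/dd.dump1") == "${dump1}${dump0}" ]]           # dump1 keeps its bytes, dump0's are appended
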
00:39:57.804 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:39:57.804 02:09:57 -- dd/posix.sh@19 -- # dump0=hwb7mqjkbegs8bjbw4mwo75h18a62gb2 00:39:57.804 02:09:57 -- dd/posix.sh@20 -- # gen_bytes 32 00:39:57.804 02:09:57 -- dd/common.sh@98 -- # xtrace_disable 00:39:57.804 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:39:57.804 02:09:57 -- dd/posix.sh@20 -- # dump1=hxdldlygo4efnnu27oy2omhsj3bzyn1c 00:39:57.804 02:09:57 -- dd/posix.sh@22 -- # printf %s hwb7mqjkbegs8bjbw4mwo75h18a62gb2 00:39:57.804 02:09:57 -- dd/posix.sh@23 -- # printf %s hxdldlygo4efnnu27oy2omhsj3bzyn1c 00:39:57.804 02:09:57 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:39:57.804 [2024-04-24 02:09:57.709622] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:39:57.804 [2024-04-24 02:09:57.709910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144358 ] 00:39:58.062 [2024-04-24 02:09:57.912841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.321 [2024-04-24 02:09:58.208050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.481  Copying: 32/32 [B] (average 31 kBps) 00:40:00.481 00:40:00.481 02:10:00 -- dd/posix.sh@27 -- # [[ hxdldlygo4efnnu27oy2omhsj3bzyn1chwb7mqjkbegs8bjbw4mwo75h18a62gb2 == \h\x\d\l\d\l\y\g\o\4\e\f\n\n\u\2\7\o\y\2\o\m\h\s\j\3\b\z\y\n\1\c\h\w\b\7\m\q\j\k\b\e\g\s\8\b\j\b\w\4\m\w\o\7\5\h\1\8\a\6\2\g\b\2 ]] 00:40:00.481 00:40:00.481 real 0m2.627s 00:40:00.481 user 0m2.211s 00:40:00.481 sys 0m0.272s 00:40:00.481 ************************************ 00:40:00.481 END TEST dd_flag_append_forced_aio 00:40:00.481 ************************************ 00:40:00.481 02:10:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:00.481 02:10:00 -- common/autotest_common.sh@10 -- # set +x 00:40:00.481 02:10:00 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:40:00.481 02:10:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:00.481 02:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:00.481 02:10:00 -- common/autotest_common.sh@10 -- # set +x 00:40:00.481 ************************************ 00:40:00.481 START TEST dd_flag_directory_forced_aio 00:40:00.481 ************************************ 00:40:00.481 02:10:00 -- common/autotest_common.sh@1111 -- # directory 00:40:00.481 02:10:00 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:00.481 02:10:00 -- common/autotest_common.sh@638 -- # local es=0 00:40:00.481 02:10:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:00.481 02:10:00 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:00.481 02:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:00.481 02:10:00 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:00.481 02:10:00 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:00.481 02:10:00 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:00.481 02:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:00.481 02:10:00 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:00.481 02:10:00 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:00.481 02:10:00 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:00.481 [2024-04-24 02:10:00.407953] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:40:00.481 [2024-04-24 02:10:00.408173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144417 ] 00:40:00.739 [2024-04-24 02:10:00.587316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.997 [2024-04-24 02:10:00.829379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.255 [2024-04-24 02:10:01.238807] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:01.255 [2024-04-24 02:10:01.238891] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:01.255 [2024-04-24 02:10:01.238931] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:02.187 [2024-04-24 02:10:02.222514] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:02.753 02:10:02 -- common/autotest_common.sh@641 -- # es=236 00:40:02.753 02:10:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:40:02.753 02:10:02 -- common/autotest_common.sh@650 -- # es=108 00:40:02.753 02:10:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:40:02.753 02:10:02 -- common/autotest_common.sh@658 -- # es=1 00:40:02.753 02:10:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:40:02.753 02:10:02 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:02.753 02:10:02 -- common/autotest_common.sh@638 -- # local es=0 00:40:02.753 02:10:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:02.753 02:10:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:02.753 02:10:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:02.754 02:10:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:02.754 02:10:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:02.754 02:10:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:02.754 02:10:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:02.754 02:10:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
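The directory test running through this stretch is a negative test: the NOT helper from autotest_common.sh (whose introspection produces the case/type/arg records around here) invokes spdk_dd against a regular file with --iflag=directory and then --oflag=directory, and treats the "Not a directory" failures in the surrounding records as the expected outcome. Stripped of the helper, the idea is roughly as follows (the if/exit structure is a stand-in for NOT):

    # Reconstruction of the directory negative test; the real script wraps these calls in the NOT helper.
    set -e
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    if "$SPDK_DD" --aio --if="$DUMP" --iflag=directory --of="$DUMP"; then
        echo "unexpected success: --iflag=directory accepted a regular file" >&2; exit 1
    fi
    if "$SPDK_DD" --aio --if="$DUMP" --of="$DUMP" --oflag=directory; then
        echo "unexpected success: --oflag=directory accepted a regular file" >&2; exit 1
    fi
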
00:40:02.754 02:10:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:02.754 02:10:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:02.754 [2024-04-24 02:10:02.820892] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:40:02.754 [2024-04-24 02:10:02.821088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144457 ] 00:40:03.011 [2024-04-24 02:10:03.000878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.295 [2024-04-24 02:10:03.230149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.861 [2024-04-24 02:10:03.649683] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:03.861 [2024-04-24 02:10:03.649765] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:03.861 [2024-04-24 02:10:03.649795] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:04.794 [2024-04-24 02:10:04.662140] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:05.360 02:10:05 -- common/autotest_common.sh@641 -- # es=236 00:40:05.360 02:10:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:40:05.360 02:10:05 -- common/autotest_common.sh@650 -- # es=108 00:40:05.360 02:10:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:40:05.360 02:10:05 -- common/autotest_common.sh@658 -- # es=1 00:40:05.360 02:10:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:40:05.360 00:40:05.360 real 0m4.828s 00:40:05.360 user 0m4.178s 00:40:05.360 sys 0m0.450s 00:40:05.360 02:10:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:05.360 02:10:05 -- common/autotest_common.sh@10 -- # set +x 00:40:05.360 ************************************ 00:40:05.360 END TEST dd_flag_directory_forced_aio 00:40:05.360 ************************************ 00:40:05.360 02:10:05 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:40:05.360 02:10:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:05.360 02:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:05.360 02:10:05 -- common/autotest_common.sh@10 -- # set +x 00:40:05.360 ************************************ 00:40:05.360 START TEST dd_flag_nofollow_forced_aio 00:40:05.360 ************************************ 00:40:05.360 02:10:05 -- common/autotest_common.sh@1111 -- # nofollow 00:40:05.360 02:10:05 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:05.360 02:10:05 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:05.360 02:10:05 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:05.360 02:10:05 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:05.360 02:10:05 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:05.360 02:10:05 -- common/autotest_common.sh@638 -- # local es=0 00:40:05.360 02:10:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:05.360 02:10:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.360 02:10:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:05.360 02:10:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.360 02:10:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:05.360 02:10:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.360 02:10:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:05.360 02:10:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.360 02:10:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:05.360 02:10:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:05.360 [2024-04-24 02:10:05.337741] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:40:05.360 [2024-04-24 02:10:05.337931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144513 ] 00:40:05.619 [2024-04-24 02:10:05.516764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.878 [2024-04-24 02:10:05.756910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.138 [2024-04-24 02:10:06.171827] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:06.138 [2024-04-24 02:10:06.171928] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:06.138 [2024-04-24 02:10:06.171957] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:07.101 [2024-04-24 02:10:07.143824] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:07.667 02:10:07 -- common/autotest_common.sh@641 -- # es=216 00:40:07.667 02:10:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:40:07.667 02:10:07 -- common/autotest_common.sh@650 -- # es=88 00:40:07.667 02:10:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:40:07.667 02:10:07 -- common/autotest_common.sh@658 -- # es=1 00:40:07.667 02:10:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:40:07.667 02:10:07 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:07.667 02:10:07 -- common/autotest_common.sh@638 -- # local es=0 00:40:07.667 02:10:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:07.667 02:10:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.667 02:10:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:07.667 02:10:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.667 02:10:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:07.667 02:10:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.667 02:10:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:07.667 02:10:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.667 02:10:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:07.667 02:10:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:07.667 [2024-04-24 02:10:07.704292] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:40:07.667 [2024-04-24 02:10:07.704653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144551 ] 00:40:07.926 [2024-04-24 02:10:07.865490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.183 [2024-04-24 02:10:08.102216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.441 [2024-04-24 02:10:08.506451] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:08.441 [2024-04-24 02:10:08.506539] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:08.441 [2024-04-24 02:10:08.506568] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:09.816 [2024-04-24 02:10:09.484546] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:10.074 02:10:10 -- common/autotest_common.sh@641 -- # es=216 00:40:10.074 02:10:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:40:10.074 02:10:10 -- common/autotest_common.sh@650 -- # es=88 00:40:10.074 02:10:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:40:10.074 02:10:10 -- common/autotest_common.sh@658 -- # es=1 00:40:10.074 02:10:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:40:10.074 02:10:10 -- dd/posix.sh@46 -- # gen_bytes 512 00:40:10.074 02:10:10 -- dd/common.sh@98 -- # xtrace_disable 00:40:10.074 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:40:10.074 02:10:10 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:10.074 [2024-04-24 02:10:10.095070] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
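The nofollow test above first creates dd.dump0.link and dd.dump1.link with ln -fs, then expects spdk_dd to fail with "Too many levels of symbolic links" when --iflag=nofollow or --oflag=nofollow is pointed at a link (again via the NOT helper), and finally copies through the link without the flag, which is the run starting here. A compressed sketch reconstructed from the trace (if/exit stands in for NOT):

    # Reconstruction of the nofollow checks; not the actual dd/posix.sh code.
    set -e
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd
    ln -fs "$D/dd.dump0" "$D/dd.dump0.link"
    ln -fs "$D/dd.dump1" "$D/dd.dump1.link"
    # reading through a symlink with nofollow is expected to fail
    if "$SPDK_DD" --aio --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"; then exit 1; fi
    # writing through a symlink with nofollow is expected to fail
    if "$SPDK_DD" --aio --if="$D/dd.dump0" --of="$D/dd.dump1.link" --oflag=nofollow; then exit 1; fi
    # without the flag, the copy through the link should go through (the 512-byte copy below)
    "$SPDK_DD" --aio --if="$D/dd.dump0.link" --of="$D/dd.dump1"
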
00:40:10.074 [2024-04-24 02:10:10.095355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144576 ] 00:40:10.333 [2024-04-24 02:10:10.279677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.591 [2024-04-24 02:10:10.617543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.532  Copying: 512/512 [B] (average 500 kBps) 00:40:12.532 00:40:12.791 02:10:12 -- dd/posix.sh@49 -- # [[ k7r99ki6vy1u7x3o2y6b97ujoifjwmjjdakox03qfxtorp2z7t7c84l77sk45536v97vpha8ljhhb7p3epoba108g7ssf1262rt9id0mzeoblf6klmb8qe7cv4nuqg87lrngm06gb0vopq81p3s16ufn34vw72f40ks9m9h7kxw49kbgqiziibdnehdef5gm1gzz41tw850ixodwqoww70idtc3bqwqe66ui6vl35upyjajmgaxd0fl2h8eer0b62tmrn5u529oxqempdfi6smilclogeworm0paif07naf6p14lyfjhgjoiklmlij390s0k0vnimhuhnurrz5grwppa8ic3niph3vua3uhslolrr77wdbw03mz5105za899pexjkcnnp3jystz6jbmf0bt3s50sci5gr0w33kmd0c376zwbyloalrdzxbmop99euvmf9wr56coebyegaqjptvxmb9bkcm8g9wt8oxectmnhh9ynugjndf4ww7nc0vvu == \k\7\r\9\9\k\i\6\v\y\1\u\7\x\3\o\2\y\6\b\9\7\u\j\o\i\f\j\w\m\j\j\d\a\k\o\x\0\3\q\f\x\t\o\r\p\2\z\7\t\7\c\8\4\l\7\7\s\k\4\5\5\3\6\v\9\7\v\p\h\a\8\l\j\h\h\b\7\p\3\e\p\o\b\a\1\0\8\g\7\s\s\f\1\2\6\2\r\t\9\i\d\0\m\z\e\o\b\l\f\6\k\l\m\b\8\q\e\7\c\v\4\n\u\q\g\8\7\l\r\n\g\m\0\6\g\b\0\v\o\p\q\8\1\p\3\s\1\6\u\f\n\3\4\v\w\7\2\f\4\0\k\s\9\m\9\h\7\k\x\w\4\9\k\b\g\q\i\z\i\i\b\d\n\e\h\d\e\f\5\g\m\1\g\z\z\4\1\t\w\8\5\0\i\x\o\d\w\q\o\w\w\7\0\i\d\t\c\3\b\q\w\q\e\6\6\u\i\6\v\l\3\5\u\p\y\j\a\j\m\g\a\x\d\0\f\l\2\h\8\e\e\r\0\b\6\2\t\m\r\n\5\u\5\2\9\o\x\q\e\m\p\d\f\i\6\s\m\i\l\c\l\o\g\e\w\o\r\m\0\p\a\i\f\0\7\n\a\f\6\p\1\4\l\y\f\j\h\g\j\o\i\k\l\m\l\i\j\3\9\0\s\0\k\0\v\n\i\m\h\u\h\n\u\r\r\z\5\g\r\w\p\p\a\8\i\c\3\n\i\p\h\3\v\u\a\3\u\h\s\l\o\l\r\r\7\7\w\d\b\w\0\3\m\z\5\1\0\5\z\a\8\9\9\p\e\x\j\k\c\n\n\p\3\j\y\s\t\z\6\j\b\m\f\0\b\t\3\s\5\0\s\c\i\5\g\r\0\w\3\3\k\m\d\0\c\3\7\6\z\w\b\y\l\o\a\l\r\d\z\x\b\m\o\p\9\9\e\u\v\m\f\9\w\r\5\6\c\o\e\b\y\e\g\a\q\j\p\t\v\x\m\b\9\b\k\c\m\8\g\9\w\t\8\o\x\e\c\t\m\n\h\h\9\y\n\u\g\j\n\d\f\4\w\w\7\n\c\0\v\v\u ]] 00:40:12.791 00:40:12.791 real 0m7.371s 00:40:12.791 user 0m6.241s 00:40:12.791 sys 0m0.793s 00:40:12.791 ************************************ 00:40:12.791 END TEST dd_flag_nofollow_forced_aio 00:40:12.791 ************************************ 00:40:12.791 02:10:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:12.791 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:40:12.791 02:10:12 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:40:12.791 02:10:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:12.791 02:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:12.791 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:40:12.791 ************************************ 00:40:12.791 START TEST dd_flag_noatime_forced_aio 00:40:12.791 ************************************ 00:40:12.791 02:10:12 -- common/autotest_common.sh@1111 -- # noatime 00:40:12.791 02:10:12 -- dd/posix.sh@53 -- # local atime_if 00:40:12.791 02:10:12 -- dd/posix.sh@54 -- # local atime_of 00:40:12.791 02:10:12 -- dd/posix.sh@58 -- # gen_bytes 512 00:40:12.791 02:10:12 -- dd/common.sh@98 -- # xtrace_disable 00:40:12.791 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:40:12.791 02:10:12 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:12.791 02:10:12 -- dd/posix.sh@60 -- # atime_if=1713924611 
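The noatime test that begins just above records the access times of both dump files with stat --printf=%X (atime_if was captured in the record above; atime_of follows), sleeps for a second, and copies with --iflag=noatime, expecting the source atime to stay put; a later copy without the flag is expected to move it forward, which is what the final (( atime_if < ... )) check verifies. A condensed sketch of that flow, reconstructed from the trace (the inline arithmetic checks stand in for the script's stored-variable comparisons):

    # Reconstruction of the noatime check; not the actual test code.
    set -e
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$SRC")                      # source access time before any copy
    sleep 1
    "$SPDK_DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
    (( $(stat --printf=%X "$SRC") == atime_if ))             # noatime: source atime must not move
    "$SPDK_DD" --aio --if="$SRC" --of="$DST"
    (( $(stat --printf=%X "$SRC") > atime_if ))              # without the flag the atime advances in the trace
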
00:40:12.791 02:10:12 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:12.791 02:10:12 -- dd/posix.sh@61 -- # atime_of=1713924612 00:40:12.791 02:10:12 -- dd/posix.sh@66 -- # sleep 1 00:40:13.725 02:10:13 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:13.983 [2024-04-24 02:10:13.811130] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:40:13.983 [2024-04-24 02:10:13.811296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144649 ] 00:40:13.983 [2024-04-24 02:10:13.972392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.241 [2024-04-24 02:10:14.258966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.212  Copying: 512/512 [B] (average 500 kBps) 00:40:16.212 00:40:16.212 02:10:16 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:16.212 02:10:16 -- dd/posix.sh@69 -- # (( atime_if == 1713924611 )) 00:40:16.212 02:10:16 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:16.469 02:10:16 -- dd/posix.sh@70 -- # (( atime_of == 1713924612 )) 00:40:16.469 02:10:16 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:16.469 [2024-04-24 02:10:16.368812] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:16.469 [2024-04-24 02:10:16.369005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144687 ] 00:40:16.469 [2024-04-24 02:10:16.547902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.033 [2024-04-24 02:10:16.882259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.189  Copying: 512/512 [B] (average 500 kBps) 00:40:19.189 00:40:19.189 02:10:18 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:19.189 02:10:18 -- dd/posix.sh@73 -- # (( atime_if < 1713924617 )) 00:40:19.189 00:40:19.189 real 0m6.190s 00:40:19.189 user 0m4.416s 00:40:19.189 sys 0m0.520s 00:40:19.189 02:10:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:19.189 ************************************ 00:40:19.189 END TEST dd_flag_noatime_forced_aio 00:40:19.189 ************************************ 00:40:19.189 02:10:18 -- common/autotest_common.sh@10 -- # set +x 00:40:19.190 02:10:18 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:40:19.190 02:10:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:19.190 02:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:19.190 02:10:18 -- common/autotest_common.sh@10 -- # set +x 00:40:19.190 ************************************ 00:40:19.190 START TEST dd_flags_misc_forced_aio 00:40:19.190 ************************************ 00:40:19.190 02:10:19 -- common/autotest_common.sh@1111 -- # io 00:40:19.190 02:10:19 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:40:19.190 02:10:19 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:40:19.190 02:10:19 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:40:19.190 02:10:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:19.190 02:10:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:40:19.190 02:10:19 -- dd/common.sh@98 -- # xtrace_disable 00:40:19.190 02:10:19 -- common/autotest_common.sh@10 -- # set +x 00:40:19.190 02:10:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:19.190 02:10:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:19.190 [2024-04-24 02:10:19.083591] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:19.190 [2024-04-24 02:10:19.083735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144748 ] 00:40:19.190 [2024-04-24 02:10:19.244767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.756 [2024-04-24 02:10:19.558227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.918  Copying: 512/512 [B] (average 500 kBps) 00:40:21.918 00:40:21.918 02:10:21 -- dd/posix.sh@93 -- # [[ e5zi4f8xj35vt8er40bvt7aut9kueg3fg3nfmke8et96erfeu6mt16mii1kf7ko2xbomtjshzsbvt1b0i7dngrpi34zbz8k1md5984o8hj2is0v5nfoqpo8q0hhea82dld3ftmfvryqt3hkzw0jdfwh8ls6kd91np2mdeaoa6cgz8b6hezx38zgzbw37hblhvil4unrw4dhswmhml3tf9sjgpqbee8rj8nqvnmdmvula2h3zudhdy6jpkk60o9h1tjxgu8lu50wz0p2opvv6k8cszpx0ybnrggh4e19k0gf80uy163kqrt8pan1gqjcbwu1zyfpl451uuxb6yejiqnmoaczerysywdtpihep165in0vxzdcz1b1ja6163fz2eda6eusosw8p83nw6uzfautq6nbidfxcbzwgw4xvi9tmhzzo5eby9vxvky9zcw2gtz7mm83c3kqkz9mdwp0p6boeibj1kpt4958x39lw9b8jzmmirl84un8dzwgs400e == \e\5\z\i\4\f\8\x\j\3\5\v\t\8\e\r\4\0\b\v\t\7\a\u\t\9\k\u\e\g\3\f\g\3\n\f\m\k\e\8\e\t\9\6\e\r\f\e\u\6\m\t\1\6\m\i\i\1\k\f\7\k\o\2\x\b\o\m\t\j\s\h\z\s\b\v\t\1\b\0\i\7\d\n\g\r\p\i\3\4\z\b\z\8\k\1\m\d\5\9\8\4\o\8\h\j\2\i\s\0\v\5\n\f\o\q\p\o\8\q\0\h\h\e\a\8\2\d\l\d\3\f\t\m\f\v\r\y\q\t\3\h\k\z\w\0\j\d\f\w\h\8\l\s\6\k\d\9\1\n\p\2\m\d\e\a\o\a\6\c\g\z\8\b\6\h\e\z\x\3\8\z\g\z\b\w\3\7\h\b\l\h\v\i\l\4\u\n\r\w\4\d\h\s\w\m\h\m\l\3\t\f\9\s\j\g\p\q\b\e\e\8\r\j\8\n\q\v\n\m\d\m\v\u\l\a\2\h\3\z\u\d\h\d\y\6\j\p\k\k\6\0\o\9\h\1\t\j\x\g\u\8\l\u\5\0\w\z\0\p\2\o\p\v\v\6\k\8\c\s\z\p\x\0\y\b\n\r\g\g\h\4\e\1\9\k\0\g\f\8\0\u\y\1\6\3\k\q\r\t\8\p\a\n\1\g\q\j\c\b\w\u\1\z\y\f\p\l\4\5\1\u\u\x\b\6\y\e\j\i\q\n\m\o\a\c\z\e\r\y\s\y\w\d\t\p\i\h\e\p\1\6\5\i\n\0\v\x\z\d\c\z\1\b\1\j\a\6\1\6\3\f\z\2\e\d\a\6\e\u\s\o\s\w\8\p\8\3\n\w\6\u\z\f\a\u\t\q\6\n\b\i\d\f\x\c\b\z\w\g\w\4\x\v\i\9\t\m\h\z\z\o\5\e\b\y\9\v\x\v\k\y\9\z\c\w\2\g\t\z\7\m\m\8\3\c\3\k\q\k\z\9\m\d\w\p\0\p\6\b\o\e\i\b\j\1\k\p\t\4\9\5\8\x\3\9\l\w\9\b\8\j\z\m\m\i\r\l\8\4\u\n\8\d\z\w\g\s\4\0\0\e ]] 00:40:21.918 02:10:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:21.918 02:10:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:21.918 [2024-04-24 02:10:21.611427] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:21.918 [2024-04-24 02:10:21.611649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144781 ] 00:40:21.918 [2024-04-24 02:10:21.788898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.177 [2024-04-24 02:10:22.095317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.336  Copying: 512/512 [B] (average 500 kBps) 00:40:24.336 00:40:24.336 02:10:23 -- dd/posix.sh@93 -- # [[ e5zi4f8xj35vt8er40bvt7aut9kueg3fg3nfmke8et96erfeu6mt16mii1kf7ko2xbomtjshzsbvt1b0i7dngrpi34zbz8k1md5984o8hj2is0v5nfoqpo8q0hhea82dld3ftmfvryqt3hkzw0jdfwh8ls6kd91np2mdeaoa6cgz8b6hezx38zgzbw37hblhvil4unrw4dhswmhml3tf9sjgpqbee8rj8nqvnmdmvula2h3zudhdy6jpkk60o9h1tjxgu8lu50wz0p2opvv6k8cszpx0ybnrggh4e19k0gf80uy163kqrt8pan1gqjcbwu1zyfpl451uuxb6yejiqnmoaczerysywdtpihep165in0vxzdcz1b1ja6163fz2eda6eusosw8p83nw6uzfautq6nbidfxcbzwgw4xvi9tmhzzo5eby9vxvky9zcw2gtz7mm83c3kqkz9mdwp0p6boeibj1kpt4958x39lw9b8jzmmirl84un8dzwgs400e == \e\5\z\i\4\f\8\x\j\3\5\v\t\8\e\r\4\0\b\v\t\7\a\u\t\9\k\u\e\g\3\f\g\3\n\f\m\k\e\8\e\t\9\6\e\r\f\e\u\6\m\t\1\6\m\i\i\1\k\f\7\k\o\2\x\b\o\m\t\j\s\h\z\s\b\v\t\1\b\0\i\7\d\n\g\r\p\i\3\4\z\b\z\8\k\1\m\d\5\9\8\4\o\8\h\j\2\i\s\0\v\5\n\f\o\q\p\o\8\q\0\h\h\e\a\8\2\d\l\d\3\f\t\m\f\v\r\y\q\t\3\h\k\z\w\0\j\d\f\w\h\8\l\s\6\k\d\9\1\n\p\2\m\d\e\a\o\a\6\c\g\z\8\b\6\h\e\z\x\3\8\z\g\z\b\w\3\7\h\b\l\h\v\i\l\4\u\n\r\w\4\d\h\s\w\m\h\m\l\3\t\f\9\s\j\g\p\q\b\e\e\8\r\j\8\n\q\v\n\m\d\m\v\u\l\a\2\h\3\z\u\d\h\d\y\6\j\p\k\k\6\0\o\9\h\1\t\j\x\g\u\8\l\u\5\0\w\z\0\p\2\o\p\v\v\6\k\8\c\s\z\p\x\0\y\b\n\r\g\g\h\4\e\1\9\k\0\g\f\8\0\u\y\1\6\3\k\q\r\t\8\p\a\n\1\g\q\j\c\b\w\u\1\z\y\f\p\l\4\5\1\u\u\x\b\6\y\e\j\i\q\n\m\o\a\c\z\e\r\y\s\y\w\d\t\p\i\h\e\p\1\6\5\i\n\0\v\x\z\d\c\z\1\b\1\j\a\6\1\6\3\f\z\2\e\d\a\6\e\u\s\o\s\w\8\p\8\3\n\w\6\u\z\f\a\u\t\q\6\n\b\i\d\f\x\c\b\z\w\g\w\4\x\v\i\9\t\m\h\z\z\o\5\e\b\y\9\v\x\v\k\y\9\z\c\w\2\g\t\z\7\m\m\8\3\c\3\k\q\k\z\9\m\d\w\p\0\p\6\b\o\e\i\b\j\1\k\p\t\4\9\5\8\x\3\9\l\w\9\b\8\j\z\m\m\i\r\l\8\4\u\n\8\d\z\w\g\s\4\0\0\e ]] 00:40:24.336 02:10:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:24.336 02:10:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:24.336 [2024-04-24 02:10:24.025465] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:24.336 [2024-04-24 02:10:24.025680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144810 ] 00:40:24.336 [2024-04-24 02:10:24.204938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.595 [2024-04-24 02:10:24.446026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.757  Copying: 512/512 [B] (average 166 kBps) 00:40:26.757 00:40:26.757 02:10:26 -- dd/posix.sh@93 -- # [[ e5zi4f8xj35vt8er40bvt7aut9kueg3fg3nfmke8et96erfeu6mt16mii1kf7ko2xbomtjshzsbvt1b0i7dngrpi34zbz8k1md5984o8hj2is0v5nfoqpo8q0hhea82dld3ftmfvryqt3hkzw0jdfwh8ls6kd91np2mdeaoa6cgz8b6hezx38zgzbw37hblhvil4unrw4dhswmhml3tf9sjgpqbee8rj8nqvnmdmvula2h3zudhdy6jpkk60o9h1tjxgu8lu50wz0p2opvv6k8cszpx0ybnrggh4e19k0gf80uy163kqrt8pan1gqjcbwu1zyfpl451uuxb6yejiqnmoaczerysywdtpihep165in0vxzdcz1b1ja6163fz2eda6eusosw8p83nw6uzfautq6nbidfxcbzwgw4xvi9tmhzzo5eby9vxvky9zcw2gtz7mm83c3kqkz9mdwp0p6boeibj1kpt4958x39lw9b8jzmmirl84un8dzwgs400e == \e\5\z\i\4\f\8\x\j\3\5\v\t\8\e\r\4\0\b\v\t\7\a\u\t\9\k\u\e\g\3\f\g\3\n\f\m\k\e\8\e\t\9\6\e\r\f\e\u\6\m\t\1\6\m\i\i\1\k\f\7\k\o\2\x\b\o\m\t\j\s\h\z\s\b\v\t\1\b\0\i\7\d\n\g\r\p\i\3\4\z\b\z\8\k\1\m\d\5\9\8\4\o\8\h\j\2\i\s\0\v\5\n\f\o\q\p\o\8\q\0\h\h\e\a\8\2\d\l\d\3\f\t\m\f\v\r\y\q\t\3\h\k\z\w\0\j\d\f\w\h\8\l\s\6\k\d\9\1\n\p\2\m\d\e\a\o\a\6\c\g\z\8\b\6\h\e\z\x\3\8\z\g\z\b\w\3\7\h\b\l\h\v\i\l\4\u\n\r\w\4\d\h\s\w\m\h\m\l\3\t\f\9\s\j\g\p\q\b\e\e\8\r\j\8\n\q\v\n\m\d\m\v\u\l\a\2\h\3\z\u\d\h\d\y\6\j\p\k\k\6\0\o\9\h\1\t\j\x\g\u\8\l\u\5\0\w\z\0\p\2\o\p\v\v\6\k\8\c\s\z\p\x\0\y\b\n\r\g\g\h\4\e\1\9\k\0\g\f\8\0\u\y\1\6\3\k\q\r\t\8\p\a\n\1\g\q\j\c\b\w\u\1\z\y\f\p\l\4\5\1\u\u\x\b\6\y\e\j\i\q\n\m\o\a\c\z\e\r\y\s\y\w\d\t\p\i\h\e\p\1\6\5\i\n\0\v\x\z\d\c\z\1\b\1\j\a\6\1\6\3\f\z\2\e\d\a\6\e\u\s\o\s\w\8\p\8\3\n\w\6\u\z\f\a\u\t\q\6\n\b\i\d\f\x\c\b\z\w\g\w\4\x\v\i\9\t\m\h\z\z\o\5\e\b\y\9\v\x\v\k\y\9\z\c\w\2\g\t\z\7\m\m\8\3\c\3\k\q\k\z\9\m\d\w\p\0\p\6\b\o\e\i\b\j\1\k\p\t\4\9\5\8\x\3\9\l\w\9\b\8\j\z\m\m\i\r\l\8\4\u\n\8\d\z\w\g\s\4\0\0\e ]] 00:40:26.757 02:10:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:26.757 02:10:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:26.757 [2024-04-24 02:10:26.443699] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:26.757 [2024-04-24 02:10:26.444453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144846 ] 00:40:26.757 [2024-04-24 02:10:26.646206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.035 [2024-04-24 02:10:26.961606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.194  Copying: 512/512 [B] (average 250 kBps) 00:40:29.194 00:40:29.194 02:10:28 -- dd/posix.sh@93 -- # [[ e5zi4f8xj35vt8er40bvt7aut9kueg3fg3nfmke8et96erfeu6mt16mii1kf7ko2xbomtjshzsbvt1b0i7dngrpi34zbz8k1md5984o8hj2is0v5nfoqpo8q0hhea82dld3ftmfvryqt3hkzw0jdfwh8ls6kd91np2mdeaoa6cgz8b6hezx38zgzbw37hblhvil4unrw4dhswmhml3tf9sjgpqbee8rj8nqvnmdmvula2h3zudhdy6jpkk60o9h1tjxgu8lu50wz0p2opvv6k8cszpx0ybnrggh4e19k0gf80uy163kqrt8pan1gqjcbwu1zyfpl451uuxb6yejiqnmoaczerysywdtpihep165in0vxzdcz1b1ja6163fz2eda6eusosw8p83nw6uzfautq6nbidfxcbzwgw4xvi9tmhzzo5eby9vxvky9zcw2gtz7mm83c3kqkz9mdwp0p6boeibj1kpt4958x39lw9b8jzmmirl84un8dzwgs400e == \e\5\z\i\4\f\8\x\j\3\5\v\t\8\e\r\4\0\b\v\t\7\a\u\t\9\k\u\e\g\3\f\g\3\n\f\m\k\e\8\e\t\9\6\e\r\f\e\u\6\m\t\1\6\m\i\i\1\k\f\7\k\o\2\x\b\o\m\t\j\s\h\z\s\b\v\t\1\b\0\i\7\d\n\g\r\p\i\3\4\z\b\z\8\k\1\m\d\5\9\8\4\o\8\h\j\2\i\s\0\v\5\n\f\o\q\p\o\8\q\0\h\h\e\a\8\2\d\l\d\3\f\t\m\f\v\r\y\q\t\3\h\k\z\w\0\j\d\f\w\h\8\l\s\6\k\d\9\1\n\p\2\m\d\e\a\o\a\6\c\g\z\8\b\6\h\e\z\x\3\8\z\g\z\b\w\3\7\h\b\l\h\v\i\l\4\u\n\r\w\4\d\h\s\w\m\h\m\l\3\t\f\9\s\j\g\p\q\b\e\e\8\r\j\8\n\q\v\n\m\d\m\v\u\l\a\2\h\3\z\u\d\h\d\y\6\j\p\k\k\6\0\o\9\h\1\t\j\x\g\u\8\l\u\5\0\w\z\0\p\2\o\p\v\v\6\k\8\c\s\z\p\x\0\y\b\n\r\g\g\h\4\e\1\9\k\0\g\f\8\0\u\y\1\6\3\k\q\r\t\8\p\a\n\1\g\q\j\c\b\w\u\1\z\y\f\p\l\4\5\1\u\u\x\b\6\y\e\j\i\q\n\m\o\a\c\z\e\r\y\s\y\w\d\t\p\i\h\e\p\1\6\5\i\n\0\v\x\z\d\c\z\1\b\1\j\a\6\1\6\3\f\z\2\e\d\a\6\e\u\s\o\s\w\8\p\8\3\n\w\6\u\z\f\a\u\t\q\6\n\b\i\d\f\x\c\b\z\w\g\w\4\x\v\i\9\t\m\h\z\z\o\5\e\b\y\9\v\x\v\k\y\9\z\c\w\2\g\t\z\7\m\m\8\3\c\3\k\q\k\z\9\m\d\w\p\0\p\6\b\o\e\i\b\j\1\k\p\t\4\9\5\8\x\3\9\l\w\9\b\8\j\z\m\m\i\r\l\8\4\u\n\8\d\z\w\g\s\4\0\0\e ]] 00:40:29.194 02:10:28 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:29.194 02:10:28 -- dd/posix.sh@86 -- # gen_bytes 512 00:40:29.194 02:10:28 -- dd/common.sh@98 -- # xtrace_disable 00:40:29.194 02:10:28 -- common/autotest_common.sh@10 -- # set +x 00:40:29.194 02:10:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:29.194 02:10:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:29.194 [2024-04-24 02:10:29.019607] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:29.194 [2024-04-24 02:10:29.019796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144875 ] 00:40:29.194 [2024-04-24 02:10:29.200043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:29.526 [2024-04-24 02:10:29.516948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.467  Copying: 512/512 [B] (average 500 kBps) 00:40:31.467 00:40:31.467 02:10:31 -- dd/posix.sh@93 -- # [[ hraugu1hfwrhhkbm5fks9dg4owxkqx8joafx60c0ro7hrb53ro6vcrf5i48nznrytxxs80kzyyoat1dkq39mxcuk98h06ftyj75ab36oq3py465p7d1bpgjk5kij8ol9a3luh84mj17xabtwzozruywhhpoaz2sm7yov5123vfnqrpqv526tq0grex3gb3jz3fuwrcg70cxo72t1zpd6yd6eafbbc53vlpv6bt9d41rk2h8yw713d8rcggfd842vil84z3y6yslsvqni7ozoxt0d23rki8ly94vwyhfwwwffw9vpy42xanx6pns6sd5tw19d5rf8ub2xcev5p2i3e6yj278ssa6ou7m1exz6x1f9jh5w3kddciibokq5tm787zbxwbaw6mp5mf5n9o9od4ry4i28dpgl115zeafp357lml5x34vqq39kj3se4sbivn3nlzxfhzop42xwbin5tg55lccj8xxxyykzztkudpxd4joaunr29gej0l9q9kvs == \h\r\a\u\g\u\1\h\f\w\r\h\h\k\b\m\5\f\k\s\9\d\g\4\o\w\x\k\q\x\8\j\o\a\f\x\6\0\c\0\r\o\7\h\r\b\5\3\r\o\6\v\c\r\f\5\i\4\8\n\z\n\r\y\t\x\x\s\8\0\k\z\y\y\o\a\t\1\d\k\q\3\9\m\x\c\u\k\9\8\h\0\6\f\t\y\j\7\5\a\b\3\6\o\q\3\p\y\4\6\5\p\7\d\1\b\p\g\j\k\5\k\i\j\8\o\l\9\a\3\l\u\h\8\4\m\j\1\7\x\a\b\t\w\z\o\z\r\u\y\w\h\h\p\o\a\z\2\s\m\7\y\o\v\5\1\2\3\v\f\n\q\r\p\q\v\5\2\6\t\q\0\g\r\e\x\3\g\b\3\j\z\3\f\u\w\r\c\g\7\0\c\x\o\7\2\t\1\z\p\d\6\y\d\6\e\a\f\b\b\c\5\3\v\l\p\v\6\b\t\9\d\4\1\r\k\2\h\8\y\w\7\1\3\d\8\r\c\g\g\f\d\8\4\2\v\i\l\8\4\z\3\y\6\y\s\l\s\v\q\n\i\7\o\z\o\x\t\0\d\2\3\r\k\i\8\l\y\9\4\v\w\y\h\f\w\w\w\f\f\w\9\v\p\y\4\2\x\a\n\x\6\p\n\s\6\s\d\5\t\w\1\9\d\5\r\f\8\u\b\2\x\c\e\v\5\p\2\i\3\e\6\y\j\2\7\8\s\s\a\6\o\u\7\m\1\e\x\z\6\x\1\f\9\j\h\5\w\3\k\d\d\c\i\i\b\o\k\q\5\t\m\7\8\7\z\b\x\w\b\a\w\6\m\p\5\m\f\5\n\9\o\9\o\d\4\r\y\4\i\2\8\d\p\g\l\1\1\5\z\e\a\f\p\3\5\7\l\m\l\5\x\3\4\v\q\q\3\9\k\j\3\s\e\4\s\b\i\v\n\3\n\l\z\x\f\h\z\o\p\4\2\x\w\b\i\n\5\t\g\5\5\l\c\c\j\8\x\x\x\y\y\k\z\z\t\k\u\d\p\x\d\4\j\o\a\u\n\r\2\9\g\e\j\0\l\9\q\9\k\v\s ]] 00:40:31.467 02:10:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:31.467 02:10:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:31.467 [2024-04-24 02:10:31.527842] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:31.467 [2024-04-24 02:10:31.528050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144910 ] 00:40:31.724 [2024-04-24 02:10:31.709746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:31.982 [2024-04-24 02:10:31.962309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.446  Copying: 512/512 [B] (average 500 kBps) 00:40:34.446 00:40:34.446 02:10:34 -- dd/posix.sh@93 -- # [[ hraugu1hfwrhhkbm5fks9dg4owxkqx8joafx60c0ro7hrb53ro6vcrf5i48nznrytxxs80kzyyoat1dkq39mxcuk98h06ftyj75ab36oq3py465p7d1bpgjk5kij8ol9a3luh84mj17xabtwzozruywhhpoaz2sm7yov5123vfnqrpqv526tq0grex3gb3jz3fuwrcg70cxo72t1zpd6yd6eafbbc53vlpv6bt9d41rk2h8yw713d8rcggfd842vil84z3y6yslsvqni7ozoxt0d23rki8ly94vwyhfwwwffw9vpy42xanx6pns6sd5tw19d5rf8ub2xcev5p2i3e6yj278ssa6ou7m1exz6x1f9jh5w3kddciibokq5tm787zbxwbaw6mp5mf5n9o9od4ry4i28dpgl115zeafp357lml5x34vqq39kj3se4sbivn3nlzxfhzop42xwbin5tg55lccj8xxxyykzztkudpxd4joaunr29gej0l9q9kvs == \h\r\a\u\g\u\1\h\f\w\r\h\h\k\b\m\5\f\k\s\9\d\g\4\o\w\x\k\q\x\8\j\o\a\f\x\6\0\c\0\r\o\7\h\r\b\5\3\r\o\6\v\c\r\f\5\i\4\8\n\z\n\r\y\t\x\x\s\8\0\k\z\y\y\o\a\t\1\d\k\q\3\9\m\x\c\u\k\9\8\h\0\6\f\t\y\j\7\5\a\b\3\6\o\q\3\p\y\4\6\5\p\7\d\1\b\p\g\j\k\5\k\i\j\8\o\l\9\a\3\l\u\h\8\4\m\j\1\7\x\a\b\t\w\z\o\z\r\u\y\w\h\h\p\o\a\z\2\s\m\7\y\o\v\5\1\2\3\v\f\n\q\r\p\q\v\5\2\6\t\q\0\g\r\e\x\3\g\b\3\j\z\3\f\u\w\r\c\g\7\0\c\x\o\7\2\t\1\z\p\d\6\y\d\6\e\a\f\b\b\c\5\3\v\l\p\v\6\b\t\9\d\4\1\r\k\2\h\8\y\w\7\1\3\d\8\r\c\g\g\f\d\8\4\2\v\i\l\8\4\z\3\y\6\y\s\l\s\v\q\n\i\7\o\z\o\x\t\0\d\2\3\r\k\i\8\l\y\9\4\v\w\y\h\f\w\w\w\f\f\w\9\v\p\y\4\2\x\a\n\x\6\p\n\s\6\s\d\5\t\w\1\9\d\5\r\f\8\u\b\2\x\c\e\v\5\p\2\i\3\e\6\y\j\2\7\8\s\s\a\6\o\u\7\m\1\e\x\z\6\x\1\f\9\j\h\5\w\3\k\d\d\c\i\i\b\o\k\q\5\t\m\7\8\7\z\b\x\w\b\a\w\6\m\p\5\m\f\5\n\9\o\9\o\d\4\r\y\4\i\2\8\d\p\g\l\1\1\5\z\e\a\f\p\3\5\7\l\m\l\5\x\3\4\v\q\q\3\9\k\j\3\s\e\4\s\b\i\v\n\3\n\l\z\x\f\h\z\o\p\4\2\x\w\b\i\n\5\t\g\5\5\l\c\c\j\8\x\x\x\y\y\k\z\z\t\k\u\d\p\x\d\4\j\o\a\u\n\r\2\9\g\e\j\0\l\9\q\9\k\v\s ]] 00:40:34.446 02:10:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:34.446 02:10:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:34.446 [2024-04-24 02:10:34.125896] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:34.446 [2024-04-24 02:10:34.126150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144939 ] 00:40:34.446 [2024-04-24 02:10:34.315625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.703 [2024-04-24 02:10:34.633749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.168  Copying: 512/512 [B] (average 250 kBps) 00:40:37.168 00:40:37.168 02:10:36 -- dd/posix.sh@93 -- # [[ hraugu1hfwrhhkbm5fks9dg4owxkqx8joafx60c0ro7hrb53ro6vcrf5i48nznrytxxs80kzyyoat1dkq39mxcuk98h06ftyj75ab36oq3py465p7d1bpgjk5kij8ol9a3luh84mj17xabtwzozruywhhpoaz2sm7yov5123vfnqrpqv526tq0grex3gb3jz3fuwrcg70cxo72t1zpd6yd6eafbbc53vlpv6bt9d41rk2h8yw713d8rcggfd842vil84z3y6yslsvqni7ozoxt0d23rki8ly94vwyhfwwwffw9vpy42xanx6pns6sd5tw19d5rf8ub2xcev5p2i3e6yj278ssa6ou7m1exz6x1f9jh5w3kddciibokq5tm787zbxwbaw6mp5mf5n9o9od4ry4i28dpgl115zeafp357lml5x34vqq39kj3se4sbivn3nlzxfhzop42xwbin5tg55lccj8xxxyykzztkudpxd4joaunr29gej0l9q9kvs == \h\r\a\u\g\u\1\h\f\w\r\h\h\k\b\m\5\f\k\s\9\d\g\4\o\w\x\k\q\x\8\j\o\a\f\x\6\0\c\0\r\o\7\h\r\b\5\3\r\o\6\v\c\r\f\5\i\4\8\n\z\n\r\y\t\x\x\s\8\0\k\z\y\y\o\a\t\1\d\k\q\3\9\m\x\c\u\k\9\8\h\0\6\f\t\y\j\7\5\a\b\3\6\o\q\3\p\y\4\6\5\p\7\d\1\b\p\g\j\k\5\k\i\j\8\o\l\9\a\3\l\u\h\8\4\m\j\1\7\x\a\b\t\w\z\o\z\r\u\y\w\h\h\p\o\a\z\2\s\m\7\y\o\v\5\1\2\3\v\f\n\q\r\p\q\v\5\2\6\t\q\0\g\r\e\x\3\g\b\3\j\z\3\f\u\w\r\c\g\7\0\c\x\o\7\2\t\1\z\p\d\6\y\d\6\e\a\f\b\b\c\5\3\v\l\p\v\6\b\t\9\d\4\1\r\k\2\h\8\y\w\7\1\3\d\8\r\c\g\g\f\d\8\4\2\v\i\l\8\4\z\3\y\6\y\s\l\s\v\q\n\i\7\o\z\o\x\t\0\d\2\3\r\k\i\8\l\y\9\4\v\w\y\h\f\w\w\w\f\f\w\9\v\p\y\4\2\x\a\n\x\6\p\n\s\6\s\d\5\t\w\1\9\d\5\r\f\8\u\b\2\x\c\e\v\5\p\2\i\3\e\6\y\j\2\7\8\s\s\a\6\o\u\7\m\1\e\x\z\6\x\1\f\9\j\h\5\w\3\k\d\d\c\i\i\b\o\k\q\5\t\m\7\8\7\z\b\x\w\b\a\w\6\m\p\5\m\f\5\n\9\o\9\o\d\4\r\y\4\i\2\8\d\p\g\l\1\1\5\z\e\a\f\p\3\5\7\l\m\l\5\x\3\4\v\q\q\3\9\k\j\3\s\e\4\s\b\i\v\n\3\n\l\z\x\f\h\z\o\p\4\2\x\w\b\i\n\5\t\g\5\5\l\c\c\j\8\x\x\x\y\y\k\z\z\t\k\u\d\p\x\d\4\j\o\a\u\n\r\2\9\g\e\j\0\l\9\q\9\k\v\s ]] 00:40:37.168 02:10:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:37.168 02:10:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:37.168 [2024-04-24 02:10:36.886621] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:37.168 [2024-04-24 02:10:36.887574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144975 ] 00:40:37.168 [2024-04-24 02:10:37.071142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.426 [2024-04-24 02:10:37.409883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.892  Copying: 512/512 [B] (average 250 kBps) 00:40:39.892 00:40:39.892 ************************************ 00:40:39.892 END TEST dd_flags_misc_forced_aio 00:40:39.892 ************************************ 00:40:39.892 02:10:39 -- dd/posix.sh@93 -- # [[ hraugu1hfwrhhkbm5fks9dg4owxkqx8joafx60c0ro7hrb53ro6vcrf5i48nznrytxxs80kzyyoat1dkq39mxcuk98h06ftyj75ab36oq3py465p7d1bpgjk5kij8ol9a3luh84mj17xabtwzozruywhhpoaz2sm7yov5123vfnqrpqv526tq0grex3gb3jz3fuwrcg70cxo72t1zpd6yd6eafbbc53vlpv6bt9d41rk2h8yw713d8rcggfd842vil84z3y6yslsvqni7ozoxt0d23rki8ly94vwyhfwwwffw9vpy42xanx6pns6sd5tw19d5rf8ub2xcev5p2i3e6yj278ssa6ou7m1exz6x1f9jh5w3kddciibokq5tm787zbxwbaw6mp5mf5n9o9od4ry4i28dpgl115zeafp357lml5x34vqq39kj3se4sbivn3nlzxfhzop42xwbin5tg55lccj8xxxyykzztkudpxd4joaunr29gej0l9q9kvs == \h\r\a\u\g\u\1\h\f\w\r\h\h\k\b\m\5\f\k\s\9\d\g\4\o\w\x\k\q\x\8\j\o\a\f\x\6\0\c\0\r\o\7\h\r\b\5\3\r\o\6\v\c\r\f\5\i\4\8\n\z\n\r\y\t\x\x\s\8\0\k\z\y\y\o\a\t\1\d\k\q\3\9\m\x\c\u\k\9\8\h\0\6\f\t\y\j\7\5\a\b\3\6\o\q\3\p\y\4\6\5\p\7\d\1\b\p\g\j\k\5\k\i\j\8\o\l\9\a\3\l\u\h\8\4\m\j\1\7\x\a\b\t\w\z\o\z\r\u\y\w\h\h\p\o\a\z\2\s\m\7\y\o\v\5\1\2\3\v\f\n\q\r\p\q\v\5\2\6\t\q\0\g\r\e\x\3\g\b\3\j\z\3\f\u\w\r\c\g\7\0\c\x\o\7\2\t\1\z\p\d\6\y\d\6\e\a\f\b\b\c\5\3\v\l\p\v\6\b\t\9\d\4\1\r\k\2\h\8\y\w\7\1\3\d\8\r\c\g\g\f\d\8\4\2\v\i\l\8\4\z\3\y\6\y\s\l\s\v\q\n\i\7\o\z\o\x\t\0\d\2\3\r\k\i\8\l\y\9\4\v\w\y\h\f\w\w\w\f\f\w\9\v\p\y\4\2\x\a\n\x\6\p\n\s\6\s\d\5\t\w\1\9\d\5\r\f\8\u\b\2\x\c\e\v\5\p\2\i\3\e\6\y\j\2\7\8\s\s\a\6\o\u\7\m\1\e\x\z\6\x\1\f\9\j\h\5\w\3\k\d\d\c\i\i\b\o\k\q\5\t\m\7\8\7\z\b\x\w\b\a\w\6\m\p\5\m\f\5\n\9\o\9\o\d\4\r\y\4\i\2\8\d\p\g\l\1\1\5\z\e\a\f\p\3\5\7\l\m\l\5\x\3\4\v\q\q\3\9\k\j\3\s\e\4\s\b\i\v\n\3\n\l\z\x\f\h\z\o\p\4\2\x\w\b\i\n\5\t\g\5\5\l\c\c\j\8\x\x\x\y\y\k\z\z\t\k\u\d\p\x\d\4\j\o\a\u\n\r\2\9\g\e\j\0\l\9\q\9\k\v\s ]] 00:40:39.892 00:40:39.892 real 0m20.517s 00:40:39.892 user 0m17.475s 00:40:39.892 sys 0m1.983s 00:40:39.892 02:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:39.892 02:10:39 -- common/autotest_common.sh@10 -- # set +x 00:40:39.892 02:10:39 -- dd/posix.sh@1 -- # cleanup 00:40:39.892 02:10:39 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:39.892 02:10:39 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:39.892 00:40:39.892 real 1m21.947s 00:40:39.892 user 1m7.679s 00:40:39.892 sys 0m8.293s 00:40:39.892 02:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:39.892 ************************************ 00:40:39.892 02:10:39 -- common/autotest_common.sh@10 -- # set +x 00:40:39.892 END TEST spdk_dd_posix 00:40:39.892 ************************************ 00:40:39.892 02:10:39 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:40:39.892 02:10:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:39.892 02:10:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:39.892 02:10:39 -- 
common/autotest_common.sh@10 -- # set +x 00:40:39.892 ************************************ 00:40:39.892 START TEST spdk_dd_malloc 00:40:39.892 ************************************ 00:40:39.892 02:10:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:40:39.892 * Looking for test storage... 00:40:39.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:39.892 02:10:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:39.892 02:10:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:39.892 02:10:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:39.892 02:10:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:39.892 02:10:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:39.892 02:10:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:39.892 02:10:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:39.892 02:10:39 -- paths/export.sh@5 -- # export PATH 00:40:39.892 02:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:39.892 02:10:39 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:40:39.892 02:10:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:39.892 02:10:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:39.892 02:10:39 -- common/autotest_common.sh@10 -- # set +x 00:40:39.892 ************************************ 00:40:39.892 START TEST dd_malloc_copy 00:40:39.892 ************************************ 00:40:39.892 02:10:39 -- 
common/autotest_common.sh@1111 -- # malloc_copy 00:40:39.892 02:10:39 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:40:39.892 02:10:39 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:40:39.892 02:10:39 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:40:39.892 02:10:39 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:40:39.892 02:10:39 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:40:39.892 02:10:39 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:40:39.892 02:10:39 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:40:39.892 02:10:39 -- dd/malloc.sh@28 -- # gen_conf 00:40:39.892 02:10:39 -- dd/common.sh@31 -- # xtrace_disable 00:40:39.892 02:10:39 -- common/autotest_common.sh@10 -- # set +x 00:40:39.892 { 00:40:39.892 "subsystems": [ 00:40:39.892 { 00:40:39.892 "subsystem": "bdev", 00:40:39.892 "config": [ 00:40:39.892 { 00:40:39.892 "params": { 00:40:39.892 "block_size": 512, 00:40:39.892 "num_blocks": 1048576, 00:40:39.892 "name": "malloc0" 00:40:39.892 }, 00:40:39.892 "method": "bdev_malloc_create" 00:40:39.892 }, 00:40:39.892 { 00:40:39.892 "params": { 00:40:39.892 "block_size": 512, 00:40:39.892 "num_blocks": 1048576, 00:40:39.892 "name": "malloc1" 00:40:39.892 }, 00:40:39.892 "method": "bdev_malloc_create" 00:40:39.892 }, 00:40:39.892 { 00:40:39.892 "method": "bdev_wait_for_examine" 00:40:39.892 } 00:40:39.892 ] 00:40:39.892 } 00:40:39.892 ] 00:40:39.892 } 00:40:39.892 [2024-04-24 02:10:39.913763] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:40:39.892 [2024-04-24 02:10:39.913949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145097 ] 00:40:40.150 [2024-04-24 02:10:40.097069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.407 [2024-04-24 02:10:40.419968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.118  Copying: 222/512 [MB] (222 MBps) Copying: 439/512 [MB] (216 MBps) Copying: 512/512 [MB] (average 219 MBps) 00:40:50.118 00:40:50.118 02:10:49 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:40:50.118 02:10:49 -- dd/malloc.sh@33 -- # gen_conf 00:40:50.118 02:10:49 -- dd/common.sh@31 -- # xtrace_disable 00:40:50.118 02:10:49 -- common/autotest_common.sh@10 -- # set +x 00:40:50.118 { 00:40:50.118 "subsystems": [ 00:40:50.118 { 00:40:50.118 "subsystem": "bdev", 00:40:50.118 "config": [ 00:40:50.118 { 00:40:50.118 "params": { 00:40:50.118 "block_size": 512, 00:40:50.118 "num_blocks": 1048576, 00:40:50.118 "name": "malloc0" 00:40:50.118 }, 00:40:50.118 "method": "bdev_malloc_create" 00:40:50.118 }, 00:40:50.118 { 00:40:50.118 "params": { 00:40:50.118 "block_size": 512, 00:40:50.118 "num_blocks": 1048576, 00:40:50.118 "name": "malloc1" 00:40:50.118 }, 00:40:50.118 "method": "bdev_malloc_create" 00:40:50.118 }, 00:40:50.118 { 00:40:50.118 "method": "bdev_wait_for_examine" 00:40:50.118 } 00:40:50.118 ] 00:40:50.118 } 00:40:50.118 ] 00:40:50.118 } 00:40:50.118 [2024-04-24 02:10:49.760012] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:40:50.118 [2024-04-24 02:10:49.760218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145217 ] 00:40:50.118 [2024-04-24 02:10:49.941533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.118 [2024-04-24 02:10:50.187241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.173  Copying: 195/512 [MB] (195 MBps) Copying: 395/512 [MB] (200 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:41:00.173 00:41:00.173 00:41:00.173 real 0m19.857s 00:41:00.173 user 0m18.587s 00:41:00.173 sys 0m1.107s 00:41:00.173 02:10:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:00.173 ************************************ 00:41:00.173 END TEST dd_malloc_copy 00:41:00.173 ************************************ 00:41:00.173 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:41:00.173 00:41:00.173 real 0m20.043s 00:41:00.173 user 0m18.677s 00:41:00.173 sys 0m1.210s 00:41:00.173 02:10:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:00.173 ************************************ 00:41:00.173 END TEST spdk_dd_malloc 00:41:00.173 ************************************ 00:41:00.173 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:41:00.174 02:10:59 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:41:00.174 02:10:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:41:00.174 02:10:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:00.174 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:41:00.174 ************************************ 00:41:00.174 
START TEST spdk_dd_bdev_to_bdev 00:41:00.174 ************************************ 00:41:00.174 02:10:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:41:00.174 * Looking for test storage... 00:41:00.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:00.174 02:10:59 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:00.174 02:10:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.174 02:10:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.174 02:10:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.174 02:10:59 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:00.174 02:10:59 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:00.174 02:10:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:00.174 02:10:59 -- paths/export.sh@5 -- # export PATH 00:41:00.174 02:10:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:41:00.174 02:10:59 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:41:00.174 [2024-04-24 02:11:00.018915] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:41:00.174 [2024-04-24 02:11:00.019114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145403 ] 00:41:00.174 [2024-04-24 02:11:00.234681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.432 [2024-04-24 02:11:00.503796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.738  Copying: 256/256 [MB] (average 992 MBps) 00:41:02.738 00:41:02.996 02:11:02 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:02.996 02:11:02 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:02.996 02:11:02 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:41:02.997 02:11:02 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:41:02.997 02:11:02 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:41:02.997 02:11:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:41:02.997 02:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:02.997 02:11:02 -- common/autotest_common.sh@10 -- # set +x 00:41:02.997 ************************************ 00:41:02.997 START TEST dd_inflate_file 00:41:02.997 ************************************ 00:41:02.997 02:11:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:41:02.997 [2024-04-24 02:11:02.969350] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:02.997 [2024-04-24 02:11:02.969566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145449 ] 00:41:03.255 [2024-04-24 02:11:03.147776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.513 [2024-04-24 02:11:03.467499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.036  Copying: 64/64 [MB] (average 984 MBps) 00:41:06.036 00:41:06.036 00:41:06.036 real 0m2.739s 00:41:06.036 user 0m2.276s 00:41:06.036 sys 0m0.328s 00:41:06.036 02:11:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:06.036 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:41:06.036 ************************************ 00:41:06.036 END TEST dd_inflate_file 00:41:06.036 ************************************ 00:41:06.036 02:11:05 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:41:06.036 02:11:05 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:41:06.036 02:11:05 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:41:06.036 02:11:05 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:41:06.036 02:11:05 -- dd/common.sh@31 -- # xtrace_disable 00:41:06.036 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:41:06.036 02:11:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:41:06.036 02:11:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:06.036 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:41:06.036 ************************************ 00:41:06.036 START TEST dd_copy_to_out_bdev 00:41:06.036 ************************************ 00:41:06.036 02:11:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:41:06.036 { 00:41:06.036 "subsystems": [ 00:41:06.036 { 00:41:06.036 "subsystem": "bdev", 00:41:06.036 "config": [ 00:41:06.036 { 00:41:06.036 "params": { 00:41:06.036 "block_size": 4096, 00:41:06.036 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:06.036 "name": "aio1" 00:41:06.036 }, 00:41:06.036 "method": "bdev_aio_create" 00:41:06.036 }, 00:41:06.036 { 00:41:06.036 "params": { 00:41:06.036 "trtype": "pcie", 00:41:06.036 "traddr": "0000:00:10.0", 00:41:06.036 "name": "Nvme0" 00:41:06.036 }, 00:41:06.036 "method": "bdev_nvme_attach_controller" 00:41:06.036 }, 00:41:06.036 { 00:41:06.036 "method": "bdev_wait_for_examine" 00:41:06.036 } 00:41:06.036 ] 00:41:06.036 } 00:41:06.036 ] 00:41:06.036 } 00:41:06.036 [2024-04-24 02:11:05.785990] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:06.036 [2024-04-24 02:11:05.786359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145512 ] 00:41:06.036 [2024-04-24 02:11:05.950661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.293 [2024-04-24 02:11:06.229112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.563  Copying: 64/64 [MB] (average 74 MBps) 00:41:09.563 00:41:09.563 00:41:09.563 real 0m3.541s 00:41:09.563 user 0m3.145s 00:41:09.563 sys 0m0.281s 00:41:09.563 02:11:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:09.563 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:41:09.563 ************************************ 00:41:09.563 END TEST dd_copy_to_out_bdev 00:41:09.563 ************************************ 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:41:09.563 02:11:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:09.563 02:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:09.563 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:41:09.563 ************************************ 00:41:09.563 START TEST dd_offset_magic 00:41:09.563 ************************************ 00:41:09.563 02:11:09 -- common/autotest_common.sh@1111 -- # offset_magic 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:41:09.563 02:11:09 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:41:09.563 02:11:09 -- dd/common.sh@31 -- # xtrace_disable 00:41:09.563 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:41:09.563 { 00:41:09.563 "subsystems": [ 00:41:09.563 { 00:41:09.563 "subsystem": "bdev", 00:41:09.563 "config": [ 00:41:09.563 { 00:41:09.563 "params": { 00:41:09.563 "block_size": 4096, 00:41:09.563 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:09.563 "name": "aio1" 00:41:09.563 }, 00:41:09.563 "method": "bdev_aio_create" 00:41:09.563 }, 00:41:09.563 { 00:41:09.563 "params": { 00:41:09.563 "trtype": "pcie", 00:41:09.563 "traddr": "0000:00:10.0", 00:41:09.563 "name": "Nvme0" 00:41:09.563 }, 00:41:09.563 "method": "bdev_nvme_attach_controller" 00:41:09.563 }, 00:41:09.563 { 00:41:09.563 "method": "bdev_wait_for_examine" 00:41:09.563 } 00:41:09.563 ] 00:41:09.563 } 00:41:09.563 ] 00:41:09.563 } 00:41:09.563 [2024-04-24 02:11:09.414883] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:09.563 [2024-04-24 02:11:09.415036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145586 ] 00:41:09.563 [2024-04-24 02:11:09.592604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.819 [2024-04-24 02:11:09.853643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:12.651  Copying: 65/65 [MB] (average 205 MBps) 00:41:12.651 00:41:12.651 02:11:12 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:41:12.651 02:11:12 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:41:12.651 02:11:12 -- dd/common.sh@31 -- # xtrace_disable 00:41:12.651 02:11:12 -- common/autotest_common.sh@10 -- # set +x 00:41:12.651 { 00:41:12.651 "subsystems": [ 00:41:12.651 { 00:41:12.651 "subsystem": "bdev", 00:41:12.651 "config": [ 00:41:12.651 { 00:41:12.651 "params": { 00:41:12.651 "block_size": 4096, 00:41:12.651 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:12.651 "name": "aio1" 00:41:12.651 }, 00:41:12.651 "method": "bdev_aio_create" 00:41:12.651 }, 00:41:12.651 { 00:41:12.651 "params": { 00:41:12.651 "trtype": "pcie", 00:41:12.651 "traddr": "0000:00:10.0", 00:41:12.651 "name": "Nvme0" 00:41:12.651 }, 00:41:12.651 "method": "bdev_nvme_attach_controller" 00:41:12.651 }, 00:41:12.651 { 00:41:12.651 "method": "bdev_wait_for_examine" 00:41:12.651 } 00:41:12.651 ] 00:41:12.651 } 00:41:12.651 ] 00:41:12.651 } 00:41:12.651 [2024-04-24 02:11:12.388831] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:41:12.651 [2024-04-24 02:11:12.389588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145635 ] 00:41:12.651 [2024-04-24 02:11:12.557125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.910 [2024-04-24 02:11:12.803141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.853  Copying: 1024/1024 [kB] (average 500 MBps) 00:41:14.853 00:41:15.111 02:11:14 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:41:15.111 02:11:14 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:41:15.111 02:11:14 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:41:15.111 02:11:14 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:41:15.111 02:11:14 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:41:15.111 02:11:14 -- dd/common.sh@31 -- # xtrace_disable 00:41:15.111 02:11:14 -- common/autotest_common.sh@10 -- # set +x 00:41:15.111 [2024-04-24 02:11:15.011394] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:15.111 [2024-04-24 02:11:15.011554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145676 ] 00:41:15.111 { 00:41:15.111 "subsystems": [ 00:41:15.111 { 00:41:15.111 "subsystem": "bdev", 00:41:15.111 "config": [ 00:41:15.111 { 00:41:15.111 "params": { 00:41:15.111 "block_size": 4096, 00:41:15.111 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:15.111 "name": "aio1" 00:41:15.111 }, 00:41:15.111 "method": "bdev_aio_create" 00:41:15.111 }, 00:41:15.111 { 00:41:15.111 "params": { 00:41:15.111 "trtype": "pcie", 00:41:15.111 "traddr": "0000:00:10.0", 00:41:15.111 "name": "Nvme0" 00:41:15.111 }, 00:41:15.111 "method": "bdev_nvme_attach_controller" 00:41:15.111 }, 00:41:15.111 { 00:41:15.111 "method": "bdev_wait_for_examine" 00:41:15.111 } 00:41:15.111 ] 00:41:15.111 } 00:41:15.111 ] 00:41:15.111 } 00:41:15.111 [2024-04-24 02:11:15.172996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.369 [2024-04-24 02:11:15.428962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.279  Copying: 65/65 [MB] (average 250 MBps) 00:41:18.279 00:41:18.279 02:11:17 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:41:18.279 02:11:17 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:41:18.279 02:11:17 -- dd/common.sh@31 -- # xtrace_disable 00:41:18.279 02:11:17 -- common/autotest_common.sh@10 -- # set +x 00:41:18.279 { 00:41:18.279 "subsystems": [ 00:41:18.279 { 00:41:18.279 "subsystem": "bdev", 00:41:18.279 "config": [ 00:41:18.279 { 00:41:18.279 "params": { 00:41:18.279 "block_size": 4096, 00:41:18.279 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:18.279 "name": "aio1" 00:41:18.279 }, 00:41:18.279 "method": "bdev_aio_create" 00:41:18.279 }, 00:41:18.279 { 00:41:18.279 "params": { 00:41:18.279 "trtype": "pcie", 00:41:18.279 "traddr": "0000:00:10.0", 00:41:18.279 "name": "Nvme0" 00:41:18.279 }, 00:41:18.279 "method": "bdev_nvme_attach_controller" 00:41:18.279 }, 00:41:18.279 { 00:41:18.279 "method": "bdev_wait_for_examine" 00:41:18.279 } 00:41:18.279 ] 00:41:18.279 } 00:41:18.279 ] 00:41:18.279 } 00:41:18.279 [2024-04-24 02:11:18.022880] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:18.279 [2024-04-24 02:11:18.023630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145710 ] 00:41:18.279 [2024-04-24 02:11:18.186336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.536 [2024-04-24 02:11:18.451139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.000  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:21.000 00:41:21.000 02:11:20 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:41:21.000 02:11:20 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:41:21.000 00:41:21.000 real 0m11.321s 00:41:21.000 user 0m9.436s 00:41:21.000 sys 0m0.972s 00:41:21.000 02:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:21.000 02:11:20 -- common/autotest_common.sh@10 -- # set +x 00:41:21.000 ************************************ 00:41:21.000 END TEST dd_offset_magic 00:41:21.000 ************************************ 00:41:21.000 02:11:20 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:41:21.000 02:11:20 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:41:21.000 02:11:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:41:21.000 02:11:20 -- dd/common.sh@11 -- # local nvme_ref= 00:41:21.000 02:11:20 -- dd/common.sh@12 -- # local size=4194330 00:41:21.000 02:11:20 -- dd/common.sh@14 -- # local bs=1048576 00:41:21.000 02:11:20 -- dd/common.sh@15 -- # local count=5 00:41:21.000 02:11:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:41:21.000 02:11:20 -- dd/common.sh@18 -- # gen_conf 00:41:21.000 02:11:20 -- dd/common.sh@31 -- # xtrace_disable 00:41:21.000 02:11:20 -- common/autotest_common.sh@10 -- # set +x 00:41:21.000 { 00:41:21.000 "subsystems": [ 00:41:21.000 { 00:41:21.000 "subsystem": "bdev", 00:41:21.000 "config": [ 00:41:21.000 { 00:41:21.000 "params": { 00:41:21.000 "block_size": 4096, 00:41:21.000 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:21.000 "name": "aio1" 00:41:21.000 }, 00:41:21.000 "method": "bdev_aio_create" 00:41:21.000 }, 00:41:21.000 { 00:41:21.000 "params": { 00:41:21.000 "trtype": "pcie", 00:41:21.000 "traddr": "0000:00:10.0", 00:41:21.000 "name": "Nvme0" 00:41:21.000 }, 00:41:21.000 "method": "bdev_nvme_attach_controller" 00:41:21.000 }, 00:41:21.000 { 00:41:21.000 "method": "bdev_wait_for_examine" 00:41:21.001 } 00:41:21.001 ] 00:41:21.001 } 00:41:21.001 ] 00:41:21.001 } 00:41:21.001 [2024-04-24 02:11:20.782728] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:21.001 [2024-04-24 02:11:20.782894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145772 ] 00:41:21.001 [2024-04-24 02:11:20.955020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.259 [2024-04-24 02:11:21.208542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.736  Copying: 5120/5120 [kB] (average 1666 MBps) 00:41:23.736 00:41:23.737 02:11:23 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:41:23.737 02:11:23 -- dd/common.sh@10 -- # local bdev=aio1 00:41:23.737 02:11:23 -- dd/common.sh@11 -- # local nvme_ref= 00:41:23.737 02:11:23 -- dd/common.sh@12 -- # local size=4194330 00:41:23.737 02:11:23 -- dd/common.sh@14 -- # local bs=1048576 00:41:23.737 02:11:23 -- dd/common.sh@15 -- # local count=5 00:41:23.737 02:11:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:41:23.737 02:11:23 -- dd/common.sh@18 -- # gen_conf 00:41:23.737 02:11:23 -- dd/common.sh@31 -- # xtrace_disable 00:41:23.737 02:11:23 -- common/autotest_common.sh@10 -- # set +x 00:41:23.737 { 00:41:23.737 "subsystems": [ 00:41:23.737 { 00:41:23.737 "subsystem": "bdev", 00:41:23.737 "config": [ 00:41:23.737 { 00:41:23.737 "params": { 00:41:23.737 "block_size": 4096, 00:41:23.737 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:23.737 "name": "aio1" 00:41:23.737 }, 00:41:23.737 "method": "bdev_aio_create" 00:41:23.737 }, 00:41:23.737 { 00:41:23.737 "params": { 00:41:23.737 "trtype": "pcie", 00:41:23.737 "traddr": "0000:00:10.0", 00:41:23.737 "name": "Nvme0" 00:41:23.737 }, 00:41:23.737 "method": "bdev_nvme_attach_controller" 00:41:23.737 }, 00:41:23.737 { 00:41:23.737 "method": "bdev_wait_for_examine" 00:41:23.737 } 00:41:23.737 ] 00:41:23.737 } 00:41:23.737 ] 00:41:23.737 } 00:41:23.737 [2024-04-24 02:11:23.487744] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:23.737 [2024-04-24 02:11:23.487905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145813 ] 00:41:23.737 [2024-04-24 02:11:23.653636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.994 [2024-04-24 02:11:23.899364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.458  Copying: 5120/5120 [kB] (average 333 MBps) 00:41:26.458 00:41:26.458 02:11:26 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:41:26.458 00:41:26.458 real 0m26.340s 00:41:26.458 user 0m21.954s 00:41:26.458 sys 0m2.819s 00:41:26.458 02:11:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:26.458 02:11:26 -- common/autotest_common.sh@10 -- # set +x 00:41:26.458 ************************************ 00:41:26.458 END TEST spdk_dd_bdev_to_bdev 00:41:26.458 ************************************ 00:41:26.458 02:11:26 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:41:26.458 02:11:26 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:26.458 02:11:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:26.458 02:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:26.458 02:11:26 -- common/autotest_common.sh@10 -- # set +x 00:41:26.458 ************************************ 00:41:26.458 START TEST spdk_dd_sparse 00:41:26.458 ************************************ 00:41:26.458 02:11:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:26.458 * Looking for test storage... 
00:41:26.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:26.458 02:11:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:26.458 02:11:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:26.458 02:11:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:26.458 02:11:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:26.458 02:11:26 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:26.458 02:11:26 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:26.458 02:11:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:26.458 02:11:26 -- paths/export.sh@5 -- # export PATH 00:41:26.458 02:11:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:26.458 02:11:26 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:41:26.458 02:11:26 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:41:26.458 02:11:26 -- dd/sparse.sh@110 -- # file1=file_zero1 00:41:26.458 02:11:26 -- dd/sparse.sh@111 -- # file2=file_zero2 00:41:26.458 02:11:26 -- dd/sparse.sh@112 -- # file3=file_zero3 00:41:26.458 02:11:26 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:41:26.458 02:11:26 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:41:26.458 02:11:26 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:41:26.458 02:11:26 -- dd/sparse.sh@118 -- # prepare 00:41:26.458 02:11:26 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:41:26.458 02:11:26 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:41:26.458 1+0 records in 00:41:26.458 1+0 records 
out 00:41:26.458 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0122679 s, 342 MB/s 00:41:26.458 02:11:26 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:41:26.458 1+0 records in 00:41:26.458 1+0 records out 00:41:26.458 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00820532 s, 511 MB/s 00:41:26.458 02:11:26 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:41:26.458 1+0 records in 00:41:26.458 1+0 records out 00:41:26.458 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00892943 s, 470 MB/s 00:41:26.458 02:11:26 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:41:26.458 02:11:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:26.458 02:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:26.458 02:11:26 -- common/autotest_common.sh@10 -- # set +x 00:41:26.458 ************************************ 00:41:26.458 START TEST dd_sparse_file_to_file 00:41:26.458 ************************************ 00:41:26.458 02:11:26 -- common/autotest_common.sh@1111 -- # file_to_file 00:41:26.458 02:11:26 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:41:26.458 02:11:26 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:41:26.458 02:11:26 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:26.458 02:11:26 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:41:26.458 02:11:26 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:41:26.458 02:11:26 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:41:26.458 02:11:26 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:41:26.458 02:11:26 -- dd/sparse.sh@41 -- # gen_conf 00:41:26.458 02:11:26 -- dd/common.sh@31 -- # xtrace_disable 00:41:26.458 02:11:26 -- common/autotest_common.sh@10 -- # set +x 00:41:26.458 { 00:41:26.458 "subsystems": [ 00:41:26.458 { 00:41:26.458 "subsystem": "bdev", 00:41:26.458 "config": [ 00:41:26.458 { 00:41:26.458 "params": { 00:41:26.458 "block_size": 4096, 00:41:26.458 "filename": "dd_sparse_aio_disk", 00:41:26.458 "name": "dd_aio" 00:41:26.458 }, 00:41:26.458 "method": "bdev_aio_create" 00:41:26.458 }, 00:41:26.458 { 00:41:26.458 "params": { 00:41:26.458 "lvs_name": "dd_lvstore", 00:41:26.458 "bdev_name": "dd_aio" 00:41:26.458 }, 00:41:26.458 "method": "bdev_lvol_create_lvstore" 00:41:26.458 }, 00:41:26.458 { 00:41:26.458 "method": "bdev_wait_for_examine" 00:41:26.458 } 00:41:26.458 ] 00:41:26.458 } 00:41:26.458 ] 00:41:26.458 } 00:41:26.458 [2024-04-24 02:11:26.501792] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:26.458 [2024-04-24 02:11:26.501962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145918 ] 00:41:26.785 [2024-04-24 02:11:26.667567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.046 [2024-04-24 02:11:26.922870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.512  Copying: 12/36 [MB] (average 800 MBps) 00:41:29.512 00:41:29.512 02:11:29 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:41:29.512 02:11:29 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:41:29.512 02:11:29 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:41:29.512 02:11:29 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:41:29.512 02:11:29 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:41:29.512 02:11:29 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:41:29.512 02:11:29 -- dd/sparse.sh@52 -- # stat1_b=24576 00:41:29.512 02:11:29 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:41:29.512 02:11:29 -- dd/sparse.sh@53 -- # stat2_b=24576 00:41:29.512 02:11:29 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:41:29.512 00:41:29.512 real 0m2.838s 00:41:29.512 user 0m2.415s 00:41:29.512 sys 0m0.276s 00:41:29.512 02:11:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:29.512 02:11:29 -- common/autotest_common.sh@10 -- # set +x 00:41:29.512 ************************************ 00:41:29.512 END TEST dd_sparse_file_to_file 00:41:29.512 ************************************ 00:41:29.512 02:11:29 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:41:29.512 02:11:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:29.512 02:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:29.512 02:11:29 -- common/autotest_common.sh@10 -- # set +x 00:41:29.512 ************************************ 00:41:29.512 START TEST dd_sparse_file_to_bdev 00:41:29.512 ************************************ 00:41:29.512 02:11:29 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:41:29.512 02:11:29 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:29.512 02:11:29 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:41:29.512 02:11:29 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:41:29.512 02:11:29 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:41:29.512 02:11:29 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:41:29.512 02:11:29 -- dd/sparse.sh@73 -- # gen_conf 00:41:29.512 02:11:29 -- dd/common.sh@31 -- # xtrace_disable 00:41:29.512 02:11:29 -- common/autotest_common.sh@10 -- # set +x 00:41:29.512 { 00:41:29.512 "subsystems": [ 00:41:29.512 { 00:41:29.512 "subsystem": "bdev", 00:41:29.512 "config": [ 00:41:29.512 { 00:41:29.512 "params": { 00:41:29.512 "block_size": 4096, 00:41:29.512 "filename": "dd_sparse_aio_disk", 00:41:29.512 "name": "dd_aio" 00:41:29.512 }, 00:41:29.512 "method": "bdev_aio_create" 00:41:29.512 }, 00:41:29.512 { 00:41:29.512 "params": { 00:41:29.512 "lvs_name": "dd_lvstore", 00:41:29.512 "lvol_name": "dd_lvol", 00:41:29.512 "size": 37748736, 00:41:29.512 "thin_provision": true 00:41:29.512 }, 
00:41:29.512 "method": "bdev_lvol_create" 00:41:29.512 }, 00:41:29.512 { 00:41:29.512 "method": "bdev_wait_for_examine" 00:41:29.512 } 00:41:29.512 ] 00:41:29.512 } 00:41:29.512 ] 00:41:29.512 } 00:41:29.512 [2024-04-24 02:11:29.423070] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:41:29.512 [2024-04-24 02:11:29.423241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145994 ] 00:41:29.512 [2024-04-24 02:11:29.586093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:29.833 [2024-04-24 02:11:29.836759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.399 [2024-04-24 02:11:30.285735] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:41:30.399  Copying: 12/36 [MB] (average 600 MBps)[2024-04-24 02:11:30.356015] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:41:32.301 00:41:32.301 00:41:32.301 00:41:32.301 real 0m2.701s 00:41:32.301 user 0m2.333s 00:41:32.301 sys 0m0.265s 00:41:32.301 02:11:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:32.301 02:11:32 -- common/autotest_common.sh@10 -- # set +x 00:41:32.301 ************************************ 00:41:32.301 END TEST dd_sparse_file_to_bdev 00:41:32.301 ************************************ 00:41:32.301 02:11:32 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:41:32.301 02:11:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:32.301 02:11:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:32.301 02:11:32 -- common/autotest_common.sh@10 -- # set +x 00:41:32.301 ************************************ 00:41:32.301 START TEST dd_sparse_bdev_to_file 00:41:32.301 ************************************ 00:41:32.301 02:11:32 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:41:32.301 02:11:32 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:41:32.301 02:11:32 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:41:32.301 02:11:32 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:32.301 02:11:32 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:41:32.301 02:11:32 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:41:32.301 02:11:32 -- dd/sparse.sh@91 -- # gen_conf 00:41:32.301 02:11:32 -- dd/common.sh@31 -- # xtrace_disable 00:41:32.301 02:11:32 -- common/autotest_common.sh@10 -- # set +x 00:41:32.301 { 00:41:32.301 "subsystems": [ 00:41:32.301 { 00:41:32.301 "subsystem": "bdev", 00:41:32.301 "config": [ 00:41:32.301 { 00:41:32.301 "params": { 00:41:32.301 "block_size": 4096, 00:41:32.301 "filename": "dd_sparse_aio_disk", 00:41:32.301 "name": "dd_aio" 00:41:32.301 }, 00:41:32.301 "method": "bdev_aio_create" 00:41:32.301 }, 00:41:32.301 { 00:41:32.301 "method": "bdev_wait_for_examine" 00:41:32.301 } 00:41:32.301 ] 00:41:32.301 } 00:41:32.301 ] 00:41:32.301 } 00:41:32.301 [2024-04-24 02:11:32.244290] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:32.301 [2024-04-24 02:11:32.244569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146058 ] 00:41:32.559 [2024-04-24 02:11:32.424918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.860 [2024-04-24 02:11:32.746044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.382  Copying: 12/36 [MB] (average 857 MBps) 00:41:35.382 00:41:35.382 02:11:35 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:41:35.382 02:11:35 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:41:35.382 02:11:35 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:41:35.382 02:11:35 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:41:35.382 02:11:35 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:41:35.382 02:11:35 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:41:35.382 02:11:35 -- dd/sparse.sh@102 -- # stat2_b=24576 00:41:35.382 02:11:35 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:41:35.382 02:11:35 -- dd/sparse.sh@103 -- # stat3_b=24576 00:41:35.382 02:11:35 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:41:35.382 00:41:35.382 real 0m2.878s 00:41:35.382 user 0m2.490s 00:41:35.382 sys 0m0.284s 00:41:35.382 02:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:35.382 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.382 ************************************ 00:41:35.382 END TEST dd_sparse_bdev_to_file 00:41:35.382 ************************************ 00:41:35.382 02:11:35 -- dd/sparse.sh@1 -- # cleanup 00:41:35.382 02:11:35 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:41:35.382 02:11:35 -- dd/sparse.sh@12 -- # rm file_zero1 00:41:35.382 02:11:35 -- dd/sparse.sh@13 -- # rm file_zero2 00:41:35.382 02:11:35 -- dd/sparse.sh@14 -- # rm file_zero3 00:41:35.382 00:41:35.382 real 0m8.862s 00:41:35.382 user 0m7.449s 00:41:35.382 sys 0m1.070s 00:41:35.382 02:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:35.382 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.382 ************************************ 00:41:35.382 END TEST spdk_dd_sparse 00:41:35.382 ************************************ 00:41:35.382 02:11:35 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:41:35.382 02:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:35.382 02:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:35.382 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.382 ************************************ 00:41:35.382 START TEST spdk_dd_negative 00:41:35.382 ************************************ 00:41:35.382 02:11:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:41:35.382 * Looking for test storage... 
00:41:35.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:35.382 02:11:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:35.382 02:11:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:35.382 02:11:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:35.382 02:11:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:35.382 02:11:35 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:35.382 02:11:35 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:35.382 02:11:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:35.382 02:11:35 -- paths/export.sh@5 -- # export PATH 00:41:35.382 02:11:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:35.382 02:11:35 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:35.382 02:11:35 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:35.382 02:11:35 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:35.382 02:11:35 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:35.382 02:11:35 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:41:35.382 02:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:35.382 02:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:35.382 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.382 ************************************ 00:41:35.382 
START TEST dd_invalid_arguments 00:41:35.382 ************************************ 00:41:35.382 02:11:35 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:41:35.382 02:11:35 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:35.382 02:11:35 -- common/autotest_common.sh@638 -- # local es=0 00:41:35.382 02:11:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:35.382 02:11:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.382 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.382 02:11:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.382 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.382 02:11:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.382 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.382 02:11:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.382 02:11:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:35.382 02:11:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:35.642 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:41:35.642 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:41:35.642 00:41:35.642 CPU options: 00:41:35.642 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:41:35.642 (like [0,1,10]) 00:41:35.642 --lcores lcore to CPU mapping list. The list is in the format: 00:41:35.642 [<,lcores[@CPUs]>...] 00:41:35.642 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:41:35.642 Within the group, '-' is used for range separator, 00:41:35.642 ',' is used for single number separator. 00:41:35.642 '( )' can be omitted for single element group, 00:41:35.642 '@' can be omitted if cpus and lcores have the same value 00:41:35.642 --disable-cpumask-locks Disable CPU core lock files. 00:41:35.642 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:41:35.642 pollers in the app support interrupt mode) 00:41:35.642 -p, --main-core main (primary) core for DPDK 00:41:35.642 00:41:35.642 Configuration options: 00:41:35.642 -c, --config, --json JSON config file 00:41:35.642 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:41:35.642 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:41:35.642 --wait-for-rpc wait for RPCs to initialize subsystems 00:41:35.642 --rpcs-allowed comma-separated list of permitted RPCS 00:41:35.642 --json-ignore-init-errors don't exit on invalid config entry 00:41:35.642 00:41:35.642 Memory options: 00:41:35.642 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:41:35.642 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:41:35.642 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:41:35.642 -R, --huge-unlink unlink huge files after initialization 00:41:35.642 -n, --mem-channels number of memory channels used for DPDK 00:41:35.642 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:41:35.642 --msg-mempool-size global message memory pool size in count (default: 262143) 00:41:35.642 --no-huge run without using hugepages 00:41:35.642 -i, --shm-id shared memory ID (optional) 00:41:35.642 -g, --single-file-segments force creating just one hugetlbfs file 00:41:35.642 00:41:35.642 PCI options: 00:41:35.642 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:41:35.642 -B, --pci-blocked pci addr to block (can be used more than once) 00:41:35.642 -u, --no-pci disable PCI access 00:41:35.642 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:41:35.642 00:41:35.642 Log options: 00:41:35.642 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:41:35.642 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:41:35.642 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:41:35.642 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:41:35.642 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:41:35.642 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:41:35.642 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:41:35.642 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:41:35.642 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:41:35.642 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:41:35.642 virtio_vfio_user, vmd) 00:41:35.642 --silence-noticelog disable notice level logging to stderr 00:41:35.642 00:41:35.642 Trace options: 00:41:35.642 --num-trace-entries number of trace entries for each core, must be power of 2, 00:41:35.642 setting 0 to disable trace (default 32768) 00:41:35.642 Tracepoints vary in size and can use more than one trace entry. 00:41:35.642 -e, --tpoint-group [:] 00:41:35.642 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:41:35.642 [2024-04-24 02:11:35.466225] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:41:35.642 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:41:35.642 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:41:35.642 a tracepoint group. First tpoint inside a group can be enabled by 00:41:35.642 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:41:35.642 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:41:35.642 in /include/spdk_internal/trace_defs.h 00:41:35.642 00:41:35.642 Other options: 00:41:35.642 -h, --help show this usage 00:41:35.642 -v, --version print SPDK version 00:41:35.642 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:41:35.642 --env-context Opaque context for use of the env implementation 00:41:35.642 00:41:35.642 Application specific: 00:41:35.642 [--------- DD Options ---------] 00:41:35.642 --if Input file. Must specify either --if or --ib. 00:41:35.642 --ib Input bdev. Must specifier either --if or --ib 00:41:35.642 --of Output file. Must specify either --of or --ob. 00:41:35.642 --ob Output bdev. Must specify either --of or --ob. 00:41:35.642 --iflag Input file flags. 00:41:35.642 --oflag Output file flags. 00:41:35.642 --bs I/O unit size (default: 4096) 00:41:35.642 --qd Queue depth (default: 2) 00:41:35.642 --count I/O unit count. The number of I/O units to copy. (default: all) 00:41:35.642 --skip Skip this many I/O units at start of input. (default: 0) 00:41:35.642 --seek Skip this many I/O units at start of output. (default: 0) 00:41:35.642 --aio Force usage of AIO. (by default io_uring is used if available) 00:41:35.642 --sparse Enable hole skipping in input target 00:41:35.642 Available iflag and oflag values: 00:41:35.642 append - append mode 00:41:35.642 direct - use direct I/O for data 00:41:35.642 directory - fail unless a directory 00:41:35.642 dsync - use synchronized I/O for data 00:41:35.642 noatime - do not update access time 00:41:35.642 noctty - do not assign controlling terminal from file 00:41:35.642 nofollow - do not follow symlinks 00:41:35.642 nonblock - use non-blocking I/O 00:41:35.642 sync - use synchronized I/O for data and metadata 00:41:35.642 02:11:35 -- common/autotest_common.sh@641 -- # es=2 00:41:35.642 02:11:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:35.642 02:11:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:35.642 02:11:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:35.642 00:41:35.642 real 0m0.141s 00:41:35.642 user 0m0.080s 00:41:35.642 sys 0m0.062s 00:41:35.642 02:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:35.642 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.642 ************************************ 00:41:35.642 END TEST dd_invalid_arguments 00:41:35.642 ************************************ 00:41:35.642 02:11:35 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:41:35.642 02:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:35.642 02:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:35.642 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.642 ************************************ 00:41:35.642 START TEST dd_double_input 00:41:35.642 ************************************ 00:41:35.642 02:11:35 -- common/autotest_common.sh@1111 -- # double_input 00:41:35.642 02:11:35 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:35.642 02:11:35 -- common/autotest_common.sh@638 -- # local es=0 00:41:35.642 02:11:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:35.642 02:11:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.642 02:11:35 -- common/autotest_common.sh@630 
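The usage text above is the full option list spdk_dd prints when it rejects '--ii='. As a hedged sketch only (the file names, block size and count below are illustrative and not taken from this run; the binary path and option names come from the usage output above), a valid file-to-file copy pairs --if with --of:

# exactly one of --if/--ib selects the input and one of --of/--ob the output;
# --bs, --qd and --count are optional (defaults: 4096, 2, copy everything)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/dd.in --of=/tmp/dd.out \
    --bs=4096 --qd=2 --count=1024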
-- # case "$(type -t "$arg")" in 00:41:35.642 02:11:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.642 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.642 02:11:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.642 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.642 02:11:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.642 02:11:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:35.643 02:11:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:35.643 [2024-04-24 02:11:35.708092] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:41:35.962 02:11:35 -- common/autotest_common.sh@641 -- # es=22 00:41:35.962 02:11:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:35.962 02:11:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:35.962 02:11:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:35.962 00:41:35.962 real 0m0.142s 00:41:35.962 user 0m0.089s 00:41:35.962 sys 0m0.051s 00:41:35.962 02:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:35.962 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.962 ************************************ 00:41:35.962 END TEST dd_double_input 00:41:35.962 ************************************ 00:41:35.962 02:11:35 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:41:35.962 02:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:35.962 02:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:35.962 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.962 ************************************ 00:41:35.962 START TEST dd_double_output 00:41:35.962 ************************************ 00:41:35.962 02:11:35 -- common/autotest_common.sh@1111 -- # double_output 00:41:35.962 02:11:35 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:35.962 02:11:35 -- common/autotest_common.sh@638 -- # local es=0 00:41:35.962 02:11:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:35.962 02:11:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.962 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.962 02:11:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.962 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.962 02:11:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.962 02:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:35.962 02:11:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:35.962 02:11:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:35.962 02:11:35 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:35.962 [2024-04-24 02:11:35.933058] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:41:35.962 02:11:35 -- common/autotest_common.sh@641 -- # es=22 00:41:35.962 02:11:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:35.962 02:11:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:35.962 02:11:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:35.962 00:41:35.962 real 0m0.111s 00:41:35.962 user 0m0.064s 00:41:35.962 sys 0m0.047s 00:41:35.962 02:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:35.962 ************************************ 00:41:35.962 END TEST dd_double_output 00:41:35.962 ************************************ 00:41:35.962 02:11:35 -- common/autotest_common.sh@10 -- # set +x 00:41:35.962 02:11:36 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:41:35.962 02:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:35.962 02:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:35.962 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.220 ************************************ 00:41:36.220 START TEST dd_no_input 00:41:36.220 ************************************ 00:41:36.220 02:11:36 -- common/autotest_common.sh@1111 -- # no_input 00:41:36.220 02:11:36 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:36.220 02:11:36 -- common/autotest_common.sh@638 -- # local es=0 00:41:36.220 02:11:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:36.220 02:11:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.220 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.220 02:11:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.220 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.220 02:11:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.220 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.220 02:11:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.220 02:11:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:36.220 02:11:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:36.220 [2024-04-24 02:11:36.162164] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:41:36.220 02:11:36 -- common/autotest_common.sh@641 -- # es=22 00:41:36.220 02:11:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:36.220 02:11:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:36.220 02:11:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:36.220 00:41:36.220 real 0m0.142s 00:41:36.220 user 0m0.064s 00:41:36.220 sys 0m0.079s 00:41:36.220 02:11:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:36.220 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.220 ************************************ 00:41:36.220 END TEST dd_no_input 00:41:36.220 ************************************ 00:41:36.220 02:11:36 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:41:36.220 02:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:36.220 02:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:36.220 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.479 ************************************ 00:41:36.479 START TEST dd_no_output 00:41:36.479 ************************************ 00:41:36.479 02:11:36 -- common/autotest_common.sh@1111 -- # no_output 00:41:36.479 02:11:36 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:36.479 02:11:36 -- common/autotest_common.sh@638 -- # local es=0 00:41:36.479 02:11:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:36.479 02:11:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.479 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.479 02:11:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.479 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.479 02:11:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.479 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.479 02:11:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.479 02:11:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:36.479 02:11:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:36.479 [2024-04-24 02:11:36.397827] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:41:36.479 02:11:36 -- common/autotest_common.sh@641 -- # es=22 00:41:36.479 02:11:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:36.479 02:11:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:36.479 02:11:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:36.479 00:41:36.479 real 0m0.146s 00:41:36.479 user 0m0.059s 00:41:36.479 sys 0m0.088s 00:41:36.479 02:11:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:36.479 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.479 ************************************ 00:41:36.479 END TEST dd_no_output 00:41:36.479 ************************************ 00:41:36.479 02:11:36 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:41:36.479 02:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:36.479 02:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:36.479 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.738 ************************************ 00:41:36.738 START TEST dd_wrong_blocksize 00:41:36.738 ************************************ 00:41:36.738 02:11:36 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:41:36.738 02:11:36 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:36.738 02:11:36 -- common/autotest_common.sh@638 -- # local es=0 00:41:36.738 02:11:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:36.738 02:11:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.738 02:11:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.738 02:11:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:36.738 02:11:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:36.738 [2024-04-24 02:11:36.648970] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:41:36.738 02:11:36 -- common/autotest_common.sh@641 -- # es=22 00:41:36.738 02:11:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:36.738 02:11:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:36.738 02:11:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:36.738 00:41:36.738 real 0m0.143s 00:41:36.738 user 0m0.068s 00:41:36.738 sys 0m0.075s 00:41:36.738 02:11:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:36.738 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.738 ************************************ 00:41:36.738 END TEST dd_wrong_blocksize 00:41:36.738 ************************************ 00:41:36.738 02:11:36 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:41:36.738 02:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:36.738 02:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:36.738 02:11:36 -- common/autotest_common.sh@10 -- # set +x 00:41:36.738 ************************************ 00:41:36.738 START TEST dd_smaller_blocksize 00:41:36.738 ************************************ 00:41:36.738 02:11:36 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:41:36.738 02:11:36 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:36.738 02:11:36 -- common/autotest_common.sh@638 -- # local es=0 00:41:36.738 02:11:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:36.738 02:11:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.738 02:11:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:36.738 02:11:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.738 02:11:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:36.738 02:11:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:36.996 [2024-04-24 02:11:36.893945] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:41:36.996 [2024-04-24 02:11:36.894138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146387 ] 00:41:36.996 [2024-04-24 02:11:37.076337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:37.562 [2024-04-24 02:11:37.401330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:38.127 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:41:38.385 [2024-04-24 02:11:38.284470] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:41:38.385 [2024-04-24 02:11:38.284580] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:39.359 [2024-04-24 02:11:39.329080] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:39.925 02:11:39 -- common/autotest_common.sh@641 -- # es=244 00:41:39.925 02:11:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:39.925 02:11:39 -- common/autotest_common.sh@650 -- # es=116 00:41:39.925 02:11:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:41:39.925 02:11:39 -- common/autotest_common.sh@658 -- # es=1 00:41:39.925 02:11:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:39.925 00:41:39.925 real 0m3.052s 00:41:39.925 user 0m2.382s 00:41:39.925 sys 0m0.571s 00:41:39.925 02:11:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:39.925 02:11:39 -- common/autotest_common.sh@10 -- # set +x 00:41:39.925 ************************************ 00:41:39.925 END TEST dd_smaller_blocksize 00:41:39.925 ************************************ 00:41:39.925 02:11:39 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:41:39.925 02:11:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:39.925 02:11:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:39.925 02:11:39 -- common/autotest_common.sh@10 -- # set +x 00:41:39.925 ************************************ 00:41:39.925 START TEST dd_invalid_count 00:41:39.925 ************************************ 00:41:39.925 02:11:39 -- common/autotest_common.sh@1111 -- # invalid_count 00:41:39.925 02:11:39 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:41:39.925 02:11:39 -- common/autotest_common.sh@638 -- # local es=0 00:41:39.925 02:11:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:41:39.925 02:11:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:39.925 02:11:39 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:39.925 02:11:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:39.925 02:11:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:39.925 02:11:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:39.925 02:11:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:39.925 02:11:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:39.925 02:11:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:39.925 02:11:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:41:40.183 [2024-04-24 02:11:40.022988] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:41:40.183 02:11:40 -- common/autotest_common.sh@641 -- # es=22 00:41:40.183 02:11:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:40.183 02:11:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:40.183 02:11:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:40.183 00:41:40.183 real 0m0.135s 00:41:40.183 user 0m0.073s 00:41:40.183 sys 0m0.062s 00:41:40.183 02:11:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:40.183 02:11:40 -- common/autotest_common.sh@10 -- # set +x 00:41:40.183 ************************************ 00:41:40.183 END TEST dd_invalid_count 00:41:40.183 ************************************ 00:41:40.183 02:11:40 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:41:40.183 02:11:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:40.183 02:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:40.183 02:11:40 -- common/autotest_common.sh@10 -- # set +x 00:41:40.183 ************************************ 00:41:40.183 START TEST dd_invalid_oflag 00:41:40.183 ************************************ 00:41:40.183 02:11:40 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:41:40.183 02:11:40 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:41:40.183 02:11:40 -- common/autotest_common.sh@638 -- # local es=0 00:41:40.183 02:11:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:41:40.183 02:11:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.183 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.183 02:11:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.183 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.183 02:11:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.183 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.183 02:11:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.183 02:11:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:40.183 02:11:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:41:40.183 [2024-04-24 02:11:40.229607] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:41:40.443 02:11:40 -- common/autotest_common.sh@641 -- # es=22 00:41:40.443 02:11:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:40.443 02:11:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:40.443 02:11:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:40.443 00:41:40.443 real 0m0.126s 00:41:40.443 user 0m0.079s 00:41:40.443 sys 0m0.046s 00:41:40.443 02:11:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:40.443 02:11:40 -- common/autotest_common.sh@10 -- # set +x 00:41:40.443 ************************************ 00:41:40.443 END TEST dd_invalid_oflag 00:41:40.443 ************************************ 00:41:40.443 02:11:40 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:41:40.443 02:11:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:40.443 02:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:40.443 02:11:40 -- common/autotest_common.sh@10 -- # set +x 00:41:40.443 ************************************ 00:41:40.443 START TEST dd_invalid_iflag 00:41:40.443 ************************************ 00:41:40.443 02:11:40 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:41:40.443 02:11:40 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:41:40.443 02:11:40 -- common/autotest_common.sh@638 -- # local es=0 00:41:40.443 02:11:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:41:40.443 02:11:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.443 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.443 02:11:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.443 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.443 02:11:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.443 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.443 02:11:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.443 02:11:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:40.443 02:11:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:41:40.443 [2024-04-24 02:11:40.439157] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:41:40.443 02:11:40 -- common/autotest_common.sh@641 -- # es=22 00:41:40.443 02:11:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:40.443 02:11:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:40.443 02:11:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:40.443 00:41:40.443 real 0m0.136s 00:41:40.443 user 0m0.077s 00:41:40.443 sys 0m0.060s 00:41:40.443 02:11:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:40.444 ************************************ 00:41:40.444 END TEST dd_invalid_iflag 00:41:40.444 ************************************ 00:41:40.444 02:11:40 -- common/autotest_common.sh@10 -- # set +x 00:41:40.702 02:11:40 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:41:40.702 02:11:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:40.702 02:11:40 -- common/autotest_common.sh@1093 
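The two failures above ('--oflags may be used only with --of', '--iflags may be used only with --if') show the required pairing. A hedged illustration, using flag values listed in the earlier usage output (the paths are hypothetical, not from this run):

# --iflag modifies the --if target, --oflag the --of target;
# direct and dsync are among the documented flag values
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/dd.in --iflag=direct \
    --of=/tmp/dd.out --oflag=dsync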
-- # xtrace_disable 00:41:40.702 02:11:40 -- common/autotest_common.sh@10 -- # set +x 00:41:40.702 ************************************ 00:41:40.702 START TEST dd_unknown_flag 00:41:40.702 ************************************ 00:41:40.702 02:11:40 -- common/autotest_common.sh@1111 -- # unknown_flag 00:41:40.702 02:11:40 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:41:40.702 02:11:40 -- common/autotest_common.sh@638 -- # local es=0 00:41:40.702 02:11:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:41:40.702 02:11:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.702 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.702 02:11:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.702 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.702 02:11:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.702 02:11:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:40.702 02:11:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.702 02:11:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:40.702 02:11:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:41:40.702 [2024-04-24 02:11:40.648630] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:40.702 [2024-04-24 02:11:40.648810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146539 ] 00:41:40.960 [2024-04-24 02:11:40.812138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.218 [2024-04-24 02:11:41.075562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.488 [2024-04-24 02:11:41.500289] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:41:41.488 [2024-04-24 02:11:41.500396] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:41.488  Copying: 0/0 [B] (average 0 Bps)[2024-04-24 02:11:41.500582] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:41:42.463 [2024-04-24 02:11:42.526504] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:43.029 00:41:43.029 00:41:43.029 02:11:43 -- common/autotest_common.sh@641 -- # es=234 00:41:43.029 02:11:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:43.029 02:11:43 -- common/autotest_common.sh@650 -- # es=106 00:41:43.029 02:11:43 -- common/autotest_common.sh@651 -- # case "$es" in 00:41:43.029 02:11:43 -- common/autotest_common.sh@658 -- # es=1 00:41:43.029 02:11:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:43.029 00:41:43.029 real 0m2.513s 00:41:43.029 user 0m2.184s 00:41:43.029 sys 0m0.207s 00:41:43.029 02:11:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:43.029 ************************************ 00:41:43.029 END TEST dd_unknown_flag 00:41:43.029 ************************************ 00:41:43.029 02:11:43 -- common/autotest_common.sh@10 -- # set +x 00:41:43.287 02:11:43 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:41:43.287 02:11:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:43.287 02:11:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:43.287 02:11:43 -- common/autotest_common.sh@10 -- # set +x 00:41:43.287 ************************************ 00:41:43.287 START TEST dd_invalid_json 00:41:43.287 ************************************ 00:41:43.287 02:11:43 -- common/autotest_common.sh@1111 -- # invalid_json 00:41:43.287 02:11:43 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:41:43.287 02:11:43 -- common/autotest_common.sh@638 -- # local es=0 00:41:43.287 02:11:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:41:43.287 02:11:43 -- dd/negative_dd.sh@95 -- # : 00:41:43.287 02:11:43 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:43.287 02:11:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:43.287 02:11:43 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:43.287 02:11:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:43.287 02:11:43 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:43.287 02:11:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:43.287 02:11:43 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:43.287 02:11:43 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:43.287 02:11:43 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:41:43.287 [2024-04-24 02:11:43.268924] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:41:43.287 [2024-04-24 02:11:43.269174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146598 ] 00:41:43.546 [2024-04-24 02:11:43.446742] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:43.805 [2024-04-24 02:11:43.777268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.805 [2024-04-24 02:11:43.777410] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:41:43.805 [2024-04-24 02:11:43.777456] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:43.805 [2024-04-24 02:11:43.777492] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:43.805 [2024-04-24 02:11:43.777608] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:44.370 02:11:44 -- common/autotest_common.sh@641 -- # es=234 00:41:44.370 02:11:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:44.370 02:11:44 -- common/autotest_common.sh@650 -- # es=106 00:41:44.370 02:11:44 -- common/autotest_common.sh@651 -- # case "$es" in 00:41:44.370 02:11:44 -- common/autotest_common.sh@658 -- # es=1 00:41:44.370 02:11:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:44.370 00:41:44.370 real 0m1.138s 00:41:44.370 user 0m0.910s 00:41:44.370 sys 0m0.129s 00:41:44.370 02:11:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:44.370 ************************************ 00:41:44.370 END TEST dd_invalid_json 00:41:44.370 ************************************ 00:41:44.370 02:11:44 -- common/autotest_common.sh@10 -- # set +x 00:41:44.370 00:41:44.370 real 0m9.157s 00:41:44.370 user 0m6.705s 00:41:44.370 sys 0m2.142s 00:41:44.370 02:11:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:44.370 02:11:44 -- common/autotest_common.sh@10 -- # set +x 00:41:44.370 ************************************ 00:41:44.370 END TEST spdk_dd_negative 00:41:44.370 ************************************ 00:41:44.370 00:41:44.370 real 3m24.937s 00:41:44.370 user 2m51.577s 00:41:44.370 sys 0m23.409s 00:41:44.370 02:11:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:44.370 02:11:44 -- common/autotest_common.sh@10 -- # set +x 00:41:44.370 ************************************ 00:41:44.370 END TEST spdk_dd 00:41:44.370 ************************************ 00:41:44.370 02:11:44 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:41:44.370 02:11:44 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:41:44.370 02:11:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:41:44.628 02:11:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:44.628 02:11:44 -- common/autotest_common.sh@10 -- # set +x 00:41:44.628 ************************************ 00:41:44.628 START TEST blockdev_nvme 00:41:44.628 ************************************ 
00:41:44.628 02:11:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:41:44.628 * Looking for test storage... 00:41:44.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:41:44.628 02:11:44 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:41:44.628 02:11:44 -- bdev/nbd_common.sh@6 -- # set -e 00:41:44.628 02:11:44 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:41:44.628 02:11:44 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:41:44.628 02:11:44 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:41:44.628 02:11:44 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:41:44.628 02:11:44 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:41:44.628 02:11:44 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:41:44.628 02:11:44 -- bdev/blockdev.sh@20 -- # : 00:41:44.628 02:11:44 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:41:44.628 02:11:44 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:41:44.628 02:11:44 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:41:44.628 02:11:44 -- bdev/blockdev.sh@674 -- # uname -s 00:41:44.628 02:11:44 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:41:44.628 02:11:44 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:41:44.628 02:11:44 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:41:44.628 02:11:44 -- bdev/blockdev.sh@683 -- # crypto_device= 00:41:44.628 02:11:44 -- bdev/blockdev.sh@684 -- # dek= 00:41:44.628 02:11:44 -- bdev/blockdev.sh@685 -- # env_ctx= 00:41:44.628 02:11:44 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:41:44.628 02:11:44 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:41:44.628 02:11:44 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:41:44.628 02:11:44 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:41:44.628 02:11:44 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:41:44.628 02:11:44 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=146706 00:41:44.628 02:11:44 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:44.628 02:11:44 -- bdev/blockdev.sh@49 -- # waitforlisten 146706 00:41:44.628 02:11:44 -- common/autotest_common.sh@817 -- # '[' -z 146706 ']' 00:41:44.628 02:11:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.628 02:11:44 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:41:44.628 02:11:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:44.628 02:11:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.628 02:11:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:44.628 02:11:44 -- common/autotest_common.sh@10 -- # set +x 00:41:44.628 [2024-04-24 02:11:44.676146] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:44.628 [2024-04-24 02:11:44.676314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146706 ] 00:41:44.886 [2024-04-24 02:11:44.839594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.144 [2024-04-24 02:11:45.102858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.195 02:11:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:46.195 02:11:46 -- common/autotest_common.sh@850 -- # return 0 00:41:46.195 02:11:46 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:41:46.195 02:11:46 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:41:46.195 02:11:46 -- bdev/blockdev.sh@81 -- # local json 00:41:46.195 02:11:46 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:41:46.195 02:11:46 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:41:46.195 02:11:46 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:41:46.195 02:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:46.195 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:41:46.195 02:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:46.195 02:11:46 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:41:46.195 02:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:46.195 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:41:46.195 02:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:46.195 02:11:46 -- bdev/blockdev.sh@740 -- # cat 00:41:46.195 02:11:46 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:41:46.195 02:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:46.195 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:41:46.195 02:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:46.195 02:11:46 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:41:46.195 02:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:46.195 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:41:46.195 02:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:46.195 02:11:46 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:41:46.195 02:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:46.195 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:41:46.195 02:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:46.195 02:11:46 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:41:46.195 02:11:46 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:41:46.195 02:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:46.195 02:11:46 -- common/autotest_common.sh@10 -- # set +x 00:41:46.195 02:11:46 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:41:46.195 02:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:46.195 02:11:46 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:41:46.195 02:11:46 -- bdev/blockdev.sh@749 -- # jq -r .name 00:41:46.195 02:11:46 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6e5d9bff-c09e-4c5b-900c-fe6c1244fe24"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' 
"uuid": "6e5d9bff-c09e-4c5b-900c-fe6c1244fe24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:41:46.453 02:11:46 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:41:46.453 02:11:46 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:41:46.453 02:11:46 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:41:46.454 02:11:46 -- bdev/blockdev.sh@754 -- # killprocess 146706 00:41:46.454 02:11:46 -- common/autotest_common.sh@936 -- # '[' -z 146706 ']' 00:41:46.454 02:11:46 -- common/autotest_common.sh@940 -- # kill -0 146706 00:41:46.454 02:11:46 -- common/autotest_common.sh@941 -- # uname 00:41:46.454 02:11:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:46.454 02:11:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146706 00:41:46.454 02:11:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:46.454 02:11:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:46.454 02:11:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146706' 00:41:46.454 killing process with pid 146706 00:41:46.454 02:11:46 -- common/autotest_common.sh@955 -- # kill 146706 00:41:46.454 02:11:46 -- common/autotest_common.sh@960 -- # wait 146706 00:41:49.738 02:11:49 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:49.738 02:11:49 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:41:49.738 02:11:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:41:49.738 02:11:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:49.738 02:11:49 -- common/autotest_common.sh@10 -- # set +x 00:41:49.738 ************************************ 00:41:49.738 START TEST bdev_hello_world 00:41:49.738 ************************************ 00:41:49.738 02:11:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:41:49.738 [2024-04-24 02:11:49.210657] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:41:49.738 [2024-04-24 02:11:49.210814] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146807 ] 00:41:49.738 [2024-04-24 02:11:49.371996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.738 [2024-04-24 02:11:49.603427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.997 [2024-04-24 02:11:50.075922] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:41:49.997 [2024-04-24 02:11:50.076008] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:41:49.997 [2024-04-24 02:11:50.076046] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:41:49.997 [2024-04-24 02:11:50.079318] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:41:49.997 [2024-04-24 02:11:50.080015] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:41:49.997 [2024-04-24 02:11:50.080084] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:41:49.997 [2024-04-24 02:11:50.080338] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:41:49.997 00:41:49.997 [2024-04-24 02:11:50.080376] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:41:51.371 00:41:51.371 real 0m2.259s 00:41:51.371 user 0m1.896s 00:41:51.371 sys 0m0.264s 00:41:51.371 02:11:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:51.371 02:11:51 -- common/autotest_common.sh@10 -- # set +x 00:41:51.371 ************************************ 00:41:51.371 END TEST bdev_hello_world 00:41:51.371 ************************************ 00:41:51.371 02:11:51 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:41:51.371 02:11:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:41:51.371 02:11:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:51.371 02:11:51 -- common/autotest_common.sh@10 -- # set +x 00:41:51.630 ************************************ 00:41:51.630 START TEST bdev_bounds 00:41:51.630 ************************************ 00:41:51.630 02:11:51 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:41:51.630 02:11:51 -- bdev/blockdev.sh@290 -- # bdevio_pid=146857 00:41:51.630 02:11:51 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:41:51.630 02:11:51 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:41:51.630 02:11:51 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 146857' 00:41:51.630 Process bdevio pid: 146857 00:41:51.630 02:11:51 -- bdev/blockdev.sh@293 -- # waitforlisten 146857 00:41:51.630 02:11:51 -- common/autotest_common.sh@817 -- # '[' -z 146857 ']' 00:41:51.630 02:11:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:51.630 02:11:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:51.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:51.630 02:11:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:51.630 02:11:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:51.630 02:11:51 -- common/autotest_common.sh@10 -- # set +x 00:41:51.630 [2024-04-24 02:11:51.574834] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:41:51.630 [2024-04-24 02:11:51.575026] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146857 ] 00:41:51.888 [2024-04-24 02:11:51.758761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:52.145 [2024-04-24 02:11:52.040382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:52.145 [2024-04-24 02:11:52.040457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:52.145 [2024-04-24 02:11:52.040463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.755 02:11:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:52.755 02:11:52 -- common/autotest_common.sh@850 -- # return 0 00:41:52.755 02:11:52 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:41:52.755 I/O targets: 00:41:52.755 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:41:52.755 00:41:52.755 00:41:52.755 CUnit - A unit testing framework for C - Version 2.1-3 00:41:52.755 http://cunit.sourceforge.net/ 00:41:52.755 00:41:52.755 00:41:52.755 Suite: bdevio tests on: Nvme0n1 00:41:52.755 Test: blockdev write read block ...passed 00:41:52.755 Test: blockdev write zeroes read block ...passed 00:41:52.755 Test: blockdev write zeroes read no split ...passed 00:41:52.755 Test: blockdev write zeroes read split ...passed 00:41:53.013 Test: blockdev write zeroes read split partial ...passed 00:41:53.013 Test: blockdev reset ...[2024-04-24 02:11:52.873636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:41:53.013 [2024-04-24 02:11:52.878023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:41:53.013 passed 00:41:53.013 Test: blockdev write read 8 blocks ...passed 00:41:53.013 Test: blockdev write read size > 128k ...passed 00:41:53.013 Test: blockdev write read invalid size ...passed 00:41:53.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:53.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:53.013 Test: blockdev write read max offset ...passed 00:41:53.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:53.013 Test: blockdev writev readv 8 blocks ...passed 00:41:53.013 Test: blockdev writev readv 30 x 1block ...passed 00:41:53.013 Test: blockdev writev readv block ...passed 00:41:53.013 Test: blockdev writev readv size > 128k ...passed 00:41:53.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:53.013 Test: blockdev comparev and writev ...[2024-04-24 02:11:52.885164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1060d000 len:0x1000 00:41:53.013 [2024-04-24 02:11:52.885292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:41:53.013 passed 00:41:53.013 Test: blockdev nvme passthru rw ...passed 00:41:53.013 Test: blockdev nvme passthru vendor specific ...[2024-04-24 02:11:52.886008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:41:53.013 [2024-04-24 02:11:52.886074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:41:53.013 passed 00:41:53.013 Test: blockdev nvme admin passthru ...passed 00:41:53.013 Test: blockdev copy ...passed 00:41:53.013 00:41:53.013 Run Summary: Type Total Ran Passed Failed Inactive 00:41:53.013 suites 1 1 n/a 0 0 00:41:53.013 tests 23 23 23 0 0 00:41:53.013 asserts 152 152 152 0 n/a 00:41:53.013 00:41:53.013 Elapsed time = 0.321 seconds 00:41:53.013 0 00:41:53.013 02:11:52 -- bdev/blockdev.sh@295 -- # killprocess 146857 00:41:53.013 02:11:52 -- common/autotest_common.sh@936 -- # '[' -z 146857 ']' 00:41:53.013 02:11:52 -- common/autotest_common.sh@940 -- # kill -0 146857 00:41:53.013 02:11:52 -- common/autotest_common.sh@941 -- # uname 00:41:53.013 02:11:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:53.013 02:11:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146857 00:41:53.013 02:11:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:53.013 killing process with pid 146857 00:41:53.013 02:11:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:53.013 02:11:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146857' 00:41:53.013 02:11:52 -- common/autotest_common.sh@955 -- # kill 146857 00:41:53.013 02:11:52 -- common/autotest_common.sh@960 -- # wait 146857 00:41:54.913 02:11:54 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:41:54.913 00:41:54.913 real 0m2.989s 00:41:54.913 user 0m7.059s 00:41:54.913 sys 0m0.388s 00:41:54.913 ************************************ 00:41:54.913 END TEST bdev_bounds 00:41:54.913 ************************************ 00:41:54.913 02:11:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:54.913 02:11:54 -- common/autotest_common.sh@10 -- # set +x 00:41:54.913 02:11:54 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
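The bdev_bounds test that just finished is a two-process pattern: bdevio is started with -w so it brings up the bdev layer and then waits on the default RPC socket, and tests.py triggers the CUnit suite over that socket. A rough sketch of the same flow, assuming the default /var/tmp/spdk.sock socket seen in the trace (the socket poll below is a crude stand-in for waitforlisten):

  BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
  # -w: wait for an RPC before running tests; -s 0: no extra reserved memory
  "$BDEVIO" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  # kick off the suite (write/read, comparev, reset, passthru tests shown above)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  wait "$bdevio_pid"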
00:41:54.913 02:11:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:41:54.913 02:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:54.913 02:11:54 -- common/autotest_common.sh@10 -- # set +x 00:41:54.913 ************************************ 00:41:54.913 START TEST bdev_nbd 00:41:54.913 ************************************ 00:41:54.913 02:11:54 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:41:54.913 02:11:54 -- bdev/blockdev.sh@300 -- # uname -s 00:41:54.913 02:11:54 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:41:54.913 02:11:54 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:54.913 02:11:54 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:41:54.913 02:11:54 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:41:54.913 02:11:54 -- bdev/blockdev.sh@304 -- # local bdev_all 00:41:54.913 02:11:54 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:41:54.913 02:11:54 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:41:54.913 02:11:54 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:41:54.913 02:11:54 -- bdev/blockdev.sh@311 -- # local nbd_all 00:41:54.913 02:11:54 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:41:54.913 02:11:54 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:41:54.913 02:11:54 -- bdev/blockdev.sh@314 -- # local nbd_list 00:41:54.913 02:11:54 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:41:54.913 02:11:54 -- bdev/blockdev.sh@315 -- # local bdev_list 00:41:54.913 02:11:54 -- bdev/blockdev.sh@318 -- # nbd_pid=146930 00:41:54.913 02:11:54 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:41:54.913 02:11:54 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:41:54.913 02:11:54 -- bdev/blockdev.sh@320 -- # waitforlisten 146930 /var/tmp/spdk-nbd.sock 00:41:54.913 02:11:54 -- common/autotest_common.sh@817 -- # '[' -z 146930 ']' 00:41:54.913 02:11:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:54.913 02:11:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:54.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:54.913 02:11:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:41:54.913 02:11:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:54.913 02:11:54 -- common/autotest_common.sh@10 -- # set +x 00:41:54.913 [2024-04-24 02:11:54.643675] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
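For the nbd test the bdev layer is hosted by the bdev_svc helper app on a dedicated RPC socket, and every nbd_* RPC that follows is sent to that socket with -s. A sketch of that setup step, using the paths from the trace (the socket poll stands in for waitforlisten):

  # Host the bdevs from bdev.json and expose RPC on a private socket for the nbd tests.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  nbd_pid=$!
  until [ -S /var/tmp/spdk-nbd.sock ]; do sleep 0.2; done
  # all later rpc.py calls then pass: -s /var/tmp/spdk-nbd.sock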
00:41:54.913 [2024-04-24 02:11:54.643857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:54.913 [2024-04-24 02:11:54.810218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.172 [2024-04-24 02:11:55.050044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:55.747 02:11:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:55.747 02:11:55 -- common/autotest_common.sh@850 -- # return 0 00:41:55.747 02:11:55 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@24 -- # local i 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:41:55.747 02:11:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:41:56.006 02:11:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:41:56.006 02:11:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:41:56.006 02:11:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:41:56.006 02:11:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:41:56.006 02:11:55 -- common/autotest_common.sh@855 -- # local i 00:41:56.006 02:11:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:56.006 02:11:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:56.006 02:11:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:41:56.006 02:11:55 -- common/autotest_common.sh@859 -- # break 00:41:56.006 02:11:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:56.006 02:11:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:56.006 02:11:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:56.006 1+0 records in 00:41:56.006 1+0 records out 00:41:56.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369449 s, 11.1 MB/s 00:41:56.006 02:11:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:56.006 02:11:55 -- common/autotest_common.sh@872 -- # size=4096 00:41:56.006 02:11:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:56.006 02:11:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:56.006 02:11:55 -- common/autotest_common.sh@875 -- # return 0 00:41:56.006 02:11:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:56.006 02:11:55 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:41:56.006 02:11:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:56.264 02:11:56 
-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:41:56.264 { 00:41:56.264 "nbd_device": "/dev/nbd0", 00:41:56.264 "bdev_name": "Nvme0n1" 00:41:56.264 } 00:41:56.264 ]' 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:41:56.264 { 00:41:56.264 "nbd_device": "/dev/nbd0", 00:41:56.264 "bdev_name": "Nvme0n1" 00:41:56.264 } 00:41:56.264 ]' 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@51 -- # local i 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:56.264 02:11:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@41 -- # break 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@45 -- # return 0 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:56.522 02:11:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@65 -- # true 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@65 -- # count=0 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:41:56.781 02:11:56 -- bdev/nbd_common.sh@122 -- # count=0 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@127 -- # return 0 00:41:56.782 02:11:56 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:56.782 02:11:56 -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@12 -- # local i 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:56.782 02:11:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:41:57.039 /dev/nbd0 00:41:57.039 02:11:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:57.039 02:11:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:57.039 02:11:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:41:57.039 02:11:56 -- common/autotest_common.sh@855 -- # local i 00:41:57.039 02:11:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:57.039 02:11:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:57.039 02:11:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:41:57.039 02:11:56 -- common/autotest_common.sh@859 -- # break 00:41:57.040 02:11:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:57.040 02:11:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:57.040 02:11:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:57.040 1+0 records in 00:41:57.040 1+0 records out 00:41:57.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541683 s, 7.6 MB/s 00:41:57.040 02:11:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:57.040 02:11:56 -- common/autotest_common.sh@872 -- # size=4096 00:41:57.040 02:11:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:57.040 02:11:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:57.040 02:11:56 -- common/autotest_common.sh@875 -- # return 0 00:41:57.040 02:11:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:57.040 02:11:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:57.040 02:11:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:57.040 02:11:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:57.040 02:11:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:57.298 { 00:41:57.298 "nbd_device": "/dev/nbd0", 00:41:57.298 "bdev_name": "Nvme0n1" 00:41:57.298 } 00:41:57.298 ]' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:41:57.298 { 00:41:57.298 "nbd_device": "/dev/nbd0", 00:41:57.298 "bdev_name": "Nvme0n1" 00:41:57.298 } 00:41:57.298 ]' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@65 -- # count=1 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@66 -- # echo 1 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@95 -- # count=1 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:41:57.298 
02:11:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:41:57.298 256+0 records in 00:41:57.298 256+0 records out 00:41:57.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773244 s, 136 MB/s 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:41:57.298 256+0 records in 00:41:57.298 256+0 records out 00:41:57.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0472696 s, 22.2 MB/s 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@51 -- # local i 00:41:57.298 02:11:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:57.299 02:11:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:57.556 02:11:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@41 -- # break 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@45 -- # return 0 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:57.814 02:11:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 
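The data-verify pass above boils down to mapping a bdev onto a kernel /dev/nbd device, pushing a known pattern through it with dd, and comparing it back. A condensed sketch of those steps, with the socket and sizes taken from the trace (the temp file path is illustrative):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_start_disk Nvme0n1 /dev/nbd0                        # expose the bdev as /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256      # 1 MiB of random reference data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                       # verify what the bdev stored
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'                # prints nothing once the disk is stopped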
00:41:58.072 02:11:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@65 -- # true 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@65 -- # count=0 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@104 -- # count=0 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@109 -- # return 0 00:41:58.072 02:11:57 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:41:58.072 02:11:57 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:41:58.330 malloc_lvol_verify 00:41:58.330 02:11:58 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:41:58.589 da42650c-dee7-426e-84ad-2affbf0d401e 00:41:58.589 02:11:58 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:41:58.848 bb8a27cc-b82d-49ca-896d-f435e43c345c 00:41:58.848 02:11:58 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:41:59.106 /dev/nbd0 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:41:59.106 mke2fs 1.46.5 (30-Dec-2021) 00:41:59.106 00:41:59.106 Filesystem too small for a journal 00:41:59.106 Discarding device blocks: 0/1024 done 00:41:59.106 Creating filesystem with 1024 4k blocks and 1024 inodes 00:41:59.106 00:41:59.106 Allocating group tables: 0/1 done 00:41:59.106 Writing inode tables: 0/1 done 00:41:59.106 Writing superblocks and filesystem accounting information: 0/1 done 00:41:59.106 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@51 -- # local i 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:59.106 02:11:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@41 -- # break 00:41:59.369 02:11:59 -- 
bdev/nbd_common.sh@45 -- # return 0 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:41:59.369 02:11:59 -- bdev/nbd_common.sh@147 -- # return 0 00:41:59.369 02:11:59 -- bdev/blockdev.sh@326 -- # killprocess 146930 00:41:59.369 02:11:59 -- common/autotest_common.sh@936 -- # '[' -z 146930 ']' 00:41:59.369 02:11:59 -- common/autotest_common.sh@940 -- # kill -0 146930 00:41:59.369 02:11:59 -- common/autotest_common.sh@941 -- # uname 00:41:59.369 02:11:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:59.369 02:11:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146930 00:41:59.369 02:11:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:59.369 02:11:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:59.369 02:11:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146930' 00:41:59.369 killing process with pid 146930 00:41:59.369 02:11:59 -- common/autotest_common.sh@955 -- # kill 146930 00:41:59.369 02:11:59 -- common/autotest_common.sh@960 -- # wait 146930 00:42:01.270 ************************************ 00:42:01.270 END TEST bdev_nbd 00:42:01.270 ************************************ 00:42:01.270 02:12:00 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:42:01.270 00:42:01.270 real 0m6.314s 00:42:01.270 user 0m8.665s 00:42:01.270 sys 0m1.544s 00:42:01.270 02:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:01.270 02:12:00 -- common/autotest_common.sh@10 -- # set +x 00:42:01.270 02:12:00 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:42:01.270 02:12:00 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:42:01.270 02:12:00 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:42:01.270 skipping fio tests on NVMe due to multi-ns failures. 00:42:01.270 02:12:00 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:01.270 02:12:00 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:01.270 02:12:00 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:42:01.270 02:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:01.270 02:12:00 -- common/autotest_common.sh@10 -- # set +x 00:42:01.270 ************************************ 00:42:01.270 START TEST bdev_verify 00:42:01.270 ************************************ 00:42:01.270 02:12:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:01.270 [2024-04-24 02:12:01.066454] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:01.270 [2024-04-24 02:12:01.066813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147140 ] 00:42:01.270 [2024-04-24 02:12:01.241012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:01.528 [2024-04-24 02:12:01.532388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.528 [2024-04-24 02:12:01.532392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.093 Running I/O for 5 seconds... 
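The verify run starting here and the two later bdevperf passes use the same harness; only the workload flags change (-o 65536 for the big-I/O pass, -w write_zeroes -t 1 for the zeroes pass). A sketch of this invocation, annotating the flags whose meaning is unambiguous:

  # -q 128: queue depth, -o 4096: I/O size in bytes, -w verify: read back and check written data,
  # -t 5: run for 5 seconds, -m 0x3: core mask (two reactors, matching the two jobs in the result table).
  # -C is passed exactly as in the trace above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3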
00:42:07.357 00:42:07.357 Latency(us) 00:42:07.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:07.357 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:07.357 Verification LBA range: start 0x0 length 0xa0000 00:42:07.357 Nvme0n1 : 5.01 9968.29 38.94 0.00 0.00 12773.50 1201.49 22219.82 00:42:07.357 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:07.357 Verification LBA range: start 0xa0000 length 0xa0000 00:42:07.357 Nvme0n1 : 5.01 9996.22 39.05 0.00 0.00 12738.20 756.78 24092.28 00:42:07.357 =================================================================================================================== 00:42:07.357 Total : 19964.51 77.99 0.00 0.00 12755.82 756.78 24092.28 00:42:09.268 ************************************ 00:42:09.268 END TEST bdev_verify 00:42:09.268 ************************************ 00:42:09.268 00:42:09.268 real 0m7.989s 00:42:09.268 user 0m14.529s 00:42:09.268 sys 0m0.271s 00:42:09.268 02:12:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:09.268 02:12:08 -- common/autotest_common.sh@10 -- # set +x 00:42:09.268 02:12:09 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:09.268 02:12:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:42:09.268 02:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:09.268 02:12:09 -- common/autotest_common.sh@10 -- # set +x 00:42:09.268 ************************************ 00:42:09.268 START TEST bdev_verify_big_io 00:42:09.268 ************************************ 00:42:09.268 02:12:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:09.268 [2024-04-24 02:12:09.167422] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:09.268 [2024-04-24 02:12:09.168170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147259 ] 00:42:09.525 [2024-04-24 02:12:09.359195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:09.783 [2024-04-24 02:12:09.665179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:09.783 [2024-04-24 02:12:09.665182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.348 Running I/O for 5 seconds... 
00:42:15.661 00:42:15.661 Latency(us) 00:42:15.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.661 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:15.661 Verification LBA range: start 0x0 length 0xa000 00:42:15.661 Nvme0n1 : 5.07 823.73 51.48 0.00 0.00 151567.54 1014.25 182751.82 00:42:15.661 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:15.661 Verification LBA range: start 0xa000 length 0xa000 00:42:15.661 Nvme0n1 : 5.09 829.79 51.86 0.00 0.00 150075.30 1544.78 181753.17 00:42:15.661 =================================================================================================================== 00:42:15.661 Total : 1653.51 103.34 0.00 0.00 150817.33 1014.25 182751.82 00:42:17.565 ************************************ 00:42:17.565 END TEST bdev_verify_big_io 00:42:17.565 ************************************ 00:42:17.565 00:42:17.565 real 0m8.212s 00:42:17.565 user 0m14.949s 00:42:17.565 sys 0m0.261s 00:42:17.565 02:12:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:17.565 02:12:17 -- common/autotest_common.sh@10 -- # set +x 00:42:17.565 02:12:17 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:17.565 02:12:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:42:17.565 02:12:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:17.565 02:12:17 -- common/autotest_common.sh@10 -- # set +x 00:42:17.565 ************************************ 00:42:17.565 START TEST bdev_write_zeroes 00:42:17.565 ************************************ 00:42:17.565 02:12:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:17.565 [2024-04-24 02:12:17.476253] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:17.565 [2024-04-24 02:12:17.476706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147379 ] 00:42:17.822 [2024-04-24 02:12:17.658406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.822 [2024-04-24 02:12:17.897998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.389 Running I/O for 1 seconds... 
00:42:19.760 00:42:19.761 Latency(us) 00:42:19.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:19.761 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:19.761 Nvme0n1 : 1.00 54620.95 213.36 0.00 0.00 2337.26 920.62 13044.78 00:42:19.761 =================================================================================================================== 00:42:19.761 Total : 54620.95 213.36 0.00 0.00 2337.26 920.62 13044.78 00:42:21.132 ************************************ 00:42:21.132 END TEST bdev_write_zeroes 00:42:21.132 ************************************ 00:42:21.132 00:42:21.132 real 0m3.640s 00:42:21.132 user 0m3.255s 00:42:21.132 sys 0m0.285s 00:42:21.132 02:12:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:21.132 02:12:21 -- common/autotest_common.sh@10 -- # set +x 00:42:21.132 02:12:21 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:21.132 02:12:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:42:21.132 02:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:21.132 02:12:21 -- common/autotest_common.sh@10 -- # set +x 00:42:21.132 ************************************ 00:42:21.132 START TEST bdev_json_nonenclosed 00:42:21.132 ************************************ 00:42:21.132 02:12:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:21.132 [2024-04-24 02:12:21.187117] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:21.132 [2024-04-24 02:12:21.187650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147445 ] 00:42:21.388 [2024-04-24 02:12:21.358286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.646 [2024-04-24 02:12:21.610065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.646 [2024-04-24 02:12:21.610416] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:42:21.646 [2024-04-24 02:12:21.610569] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:21.646 [2024-04-24 02:12:21.610688] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:22.211 ************************************ 00:42:22.211 END TEST bdev_json_nonenclosed 00:42:22.211 ************************************ 00:42:22.211 00:42:22.211 real 0m1.008s 00:42:22.211 user 0m0.772s 00:42:22.211 sys 0m0.133s 00:42:22.211 02:12:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:22.211 02:12:22 -- common/autotest_common.sh@10 -- # set +x 00:42:22.211 02:12:22 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:22.211 02:12:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:42:22.211 02:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:22.211 02:12:22 -- common/autotest_common.sh@10 -- # set +x 00:42:22.211 ************************************ 00:42:22.211 START TEST bdev_json_nonarray 00:42:22.211 ************************************ 00:42:22.211 02:12:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:22.469 [2024-04-24 02:12:22.296051] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:22.469 [2024-04-24 02:12:22.296323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147497 ] 00:42:22.469 [2024-04-24 02:12:22.481913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.727 [2024-04-24 02:12:22.780505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.727 [2024-04-24 02:12:22.780963] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
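The two negative JSON tests above differ only in how the config file is malformed: a valid --json config is a single object whose "subsystems" key holds an array of subsystem objects, and each error printed above corresponds to breaking one of those two rules. An illustrative valid shape (the actual nonenclosed.json and nonarray.json contents are not shown in this log, so the failure modes are inferred from the error text):

  cat > /tmp/valid.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF
  # nonenclosed.json presumably drops the outer braces      -> "not enclosed in {}"
  # nonarray.json presumably makes "subsystems" a non-array -> "'subsystems' should be an array"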
00:42:22.727 [2024-04-24 02:12:22.781200] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:22.727 [2024-04-24 02:12:22.781415] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:23.293 ************************************ 00:42:23.294 END TEST bdev_json_nonarray 00:42:23.294 ************************************ 00:42:23.294 00:42:23.294 real 0m1.107s 00:42:23.294 user 0m0.842s 00:42:23.294 sys 0m0.164s 00:42:23.294 02:12:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:23.294 02:12:23 -- common/autotest_common.sh@10 -- # set +x 00:42:23.294 02:12:23 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:42:23.294 02:12:23 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:42:23.294 02:12:23 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:42:23.294 02:12:23 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:42:23.294 02:12:23 -- bdev/blockdev.sh@811 -- # cleanup 00:42:23.294 02:12:23 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:42:23.553 02:12:23 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:23.553 02:12:23 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:42:23.553 02:12:23 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:42:23.553 02:12:23 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:42:23.553 02:12:23 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:42:23.553 00:42:23.553 real 0m38.885s 00:42:23.553 user 0m57.036s 00:42:23.553 sys 0m4.249s 00:42:23.553 02:12:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:23.553 02:12:23 -- common/autotest_common.sh@10 -- # set +x 00:42:23.553 ************************************ 00:42:23.553 END TEST blockdev_nvme 00:42:23.553 ************************************ 00:42:23.553 02:12:23 -- spdk/autotest.sh@209 -- # uname -s 00:42:23.553 02:12:23 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:42:23.553 02:12:23 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:42:23.553 02:12:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:42:23.553 02:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:23.553 02:12:23 -- common/autotest_common.sh@10 -- # set +x 00:42:23.553 ************************************ 00:42:23.553 START TEST blockdev_nvme_gpt 00:42:23.553 ************************************ 00:42:23.553 02:12:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:42:23.553 * Looking for test storage... 
00:42:23.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:23.553 02:12:23 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:23.553 02:12:23 -- bdev/nbd_common.sh@6 -- # set -e 00:42:23.553 02:12:23 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:23.553 02:12:23 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:23.553 02:12:23 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:23.553 02:12:23 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:23.553 02:12:23 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:23.553 02:12:23 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:23.553 02:12:23 -- bdev/blockdev.sh@20 -- # : 00:42:23.553 02:12:23 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:42:23.553 02:12:23 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:42:23.553 02:12:23 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:42:23.553 02:12:23 -- bdev/blockdev.sh@674 -- # uname -s 00:42:23.553 02:12:23 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:42:23.553 02:12:23 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:42:23.553 02:12:23 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:42:23.553 02:12:23 -- bdev/blockdev.sh@683 -- # crypto_device= 00:42:23.553 02:12:23 -- bdev/blockdev.sh@684 -- # dek= 00:42:23.553 02:12:23 -- bdev/blockdev.sh@685 -- # env_ctx= 00:42:23.553 02:12:23 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:42:23.553 02:12:23 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:42:23.553 02:12:23 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:42:23.553 02:12:23 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:42:23.553 02:12:23 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:42:23.553 02:12:23 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=147587 00:42:23.553 02:12:23 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:23.553 02:12:23 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:23.553 02:12:23 -- bdev/blockdev.sh@49 -- # waitforlisten 147587 00:42:23.553 02:12:23 -- common/autotest_common.sh@817 -- # '[' -z 147587 ']' 00:42:23.553 02:12:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.553 02:12:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:23.553 02:12:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.553 02:12:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:23.553 02:12:23 -- common/autotest_common.sh@10 -- # set +x 00:42:23.884 [2024-04-24 02:12:23.670969] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:42:23.884 [2024-04-24 02:12:23.671181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147587 ] 00:42:23.884 [2024-04-24 02:12:23.857438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.141 [2024-04-24 02:12:24.138351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.512 02:12:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:25.512 02:12:25 -- common/autotest_common.sh@850 -- # return 0 00:42:25.512 02:12:25 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:42:25.512 02:12:25 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:42:25.512 02:12:25 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:42:25.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:25.512 Waiting for block devices as requested 00:42:25.770 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:42:25.770 02:12:25 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:42:25.770 02:12:25 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:42:25.770 02:12:25 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:42:25.770 02:12:25 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:42:25.770 02:12:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:42:25.770 02:12:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:42:25.770 02:12:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:25.770 02:12:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:25.770 02:12:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:25.770 02:12:25 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:42:25.770 02:12:25 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:42:25.770 02:12:25 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:42:25.770 02:12:25 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:42:25.770 02:12:25 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:42:25.770 02:12:25 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:42:25.770 02:12:25 -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:42:25.770 02:12:25 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:42:25.770 BYT; 00:42:25.770 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:42:25.770 02:12:25 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:42:25.770 BYT; 00:42:25.770 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:42:25.770 02:12:25 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:42:25.770 02:12:25 -- bdev/blockdev.sh@116 -- # break 00:42:25.770 02:12:25 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:42:25.770 02:12:25 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:42:25.770 02:12:25 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:42:25.770 02:12:25 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:42:26.028 02:12:26 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:42:26.028 02:12:26 -- 
scripts/common.sh@408 -- # local spdk_guid 00:42:26.028 02:12:26 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:42:26.028 02:12:26 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:26.028 02:12:26 -- scripts/common.sh@413 -- # IFS='()' 00:42:26.028 02:12:26 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:42:26.028 02:12:26 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:26.029 02:12:26 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:42:26.029 02:12:26 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:26.029 02:12:26 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:26.307 02:12:26 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:26.308 02:12:26 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:42:26.308 02:12:26 -- scripts/common.sh@420 -- # local spdk_guid 00:42:26.308 02:12:26 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:42:26.308 02:12:26 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:26.308 02:12:26 -- scripts/common.sh@425 -- # IFS='()' 00:42:26.308 02:12:26 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:42:26.308 02:12:26 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:26.308 02:12:26 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:42:26.308 02:12:26 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:26.308 02:12:26 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:26.308 02:12:26 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:26.308 02:12:26 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:42:27.250 The operation has completed successfully. 00:42:27.250 02:12:27 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:42:28.185 The operation has completed successfully. 
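Condensing the GPT preparation above: the disk gets a fresh GPT label with two half-size partitions, then sgdisk stamps each partition with a type GUID the SPDK gpt vbdev module recognises (the current GUID on partition 1, the legacy one on partition 2) plus a fixed unique GUID that later shows up as the bdev's uuid/alias. The same steps as a standalone sketch:

  DEV=/dev/nvme0n1
  SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b        # current SPDK partition type GUID
  SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # legacy SPDK partition type GUID
  parted -s "$DEV" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"
  sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"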
00:42:28.185 02:12:28 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:28.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:28.751 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:42:29.685 02:12:29 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:42:29.685 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.685 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.685 [] 00:42:29.685 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.685 02:12:29 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:42:29.685 02:12:29 -- bdev/blockdev.sh@81 -- # local json 00:42:29.685 02:12:29 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:29.685 02:12:29 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:29.685 02:12:29 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:42:29.685 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.685 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.685 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.685 02:12:29 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:42:29.685 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.685 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.685 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.685 02:12:29 -- bdev/blockdev.sh@740 -- # cat 00:42:29.685 02:12:29 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:42:29.685 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.686 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.686 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.686 02:12:29 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:42:29.686 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.686 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.686 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.686 02:12:29 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:29.686 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.686 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.686 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.686 02:12:29 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:42:29.686 02:12:29 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:42:29.686 02:12:29 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:42:29.686 02:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:29.686 02:12:29 -- common/autotest_common.sh@10 -- # set +x 00:42:29.686 02:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:29.686 02:12:29 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:42:29.943 02:12:29 -- bdev/blockdev.sh@749 -- # jq -r .name 00:42:29.944 02:12:29 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
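The setup_nvme_conf step above feeds the running spdk_tgt a bdev-subsystem fragment produced by gen_nvme.sh: one bdev_nvme_attach_controller call pointing at the PCIe controller at 0000:00:10.0, which creates the Nvme0n1 bdev that the two GPT partitions sit on. Written out as a standalone config file (a sketch; the traddr is whatever gen_nvme.sh detects on the host):

  cat > /tmp/nvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
          }
        ]
      }
    ]
  }
  EOF
  # usable either at startup via spdk_tgt --json /tmp/nvme_bdev.json,
  # or at runtime via load_subsystem_config as in the trace above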
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:42:29.944 02:12:29 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:42:29.944 02:12:29 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:42:29.944 02:12:29 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:42:29.944 02:12:29 -- bdev/blockdev.sh@754 -- # killprocess 147587 00:42:29.944 02:12:29 -- common/autotest_common.sh@936 -- # '[' -z 147587 ']' 00:42:29.944 02:12:29 -- common/autotest_common.sh@940 -- # kill -0 147587 00:42:29.944 02:12:29 -- common/autotest_common.sh@941 -- # uname 00:42:29.944 02:12:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:29.944 02:12:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147587 00:42:29.944 02:12:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:29.944 02:12:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:29.944 02:12:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147587' 00:42:29.944 killing process with pid 147587 00:42:29.944 02:12:29 -- common/autotest_common.sh@955 -- # kill 147587 00:42:29.944 02:12:29 -- common/autotest_common.sh@960 -- # wait 147587 00:42:33.272 02:12:32 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:33.272 02:12:32 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:42:33.272 02:12:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:42:33.272 02:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:33.272 02:12:32 -- common/autotest_common.sh@10 -- # set +x 00:42:33.272 ************************************ 00:42:33.272 START TEST bdev_hello_world 00:42:33.272 ************************************ 00:42:33.272 02:12:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:42:33.272 [2024-04-24 02:12:32.713225] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:33.272 [2024-04-24 02:12:32.713408] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148048 ] 00:42:33.272 [2024-04-24 02:12:32.878810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.272 [2024-04-24 02:12:33.116108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:33.534 [2024-04-24 02:12:33.587912] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:33.534 [2024-04-24 02:12:33.588240] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:42:33.534 [2024-04-24 02:12:33.588341] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:33.534 [2024-04-24 02:12:33.591787] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:33.534 [2024-04-24 02:12:33.592409] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:33.534 [2024-04-24 02:12:33.592578] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:33.534 [2024-04-24 02:12:33.592931] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:42:33.534 00:42:33.534 [2024-04-24 02:12:33.593078] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:34.910 00:42:34.910 real 0m2.255s 00:42:34.910 user 0m1.917s 00:42:34.910 sys 0m0.237s 00:42:34.910 02:12:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:34.910 02:12:34 -- common/autotest_common.sh@10 -- # set +x 00:42:34.910 ************************************ 00:42:34.910 END TEST bdev_hello_world 00:42:34.910 ************************************ 00:42:34.910 02:12:34 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:42:34.910 02:12:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:42:34.910 02:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:34.910 02:12:34 -- common/autotest_common.sh@10 -- # set +x 00:42:35.169 ************************************ 00:42:35.169 START TEST bdev_bounds 00:42:35.169 ************************************ 00:42:35.169 02:12:34 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:42:35.169 02:12:34 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:35.169 02:12:34 -- bdev/blockdev.sh@290 -- # bdevio_pid=148102 00:42:35.169 02:12:34 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:35.169 02:12:34 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 148102' 00:42:35.169 Process bdevio pid: 148102 00:42:35.169 02:12:34 -- bdev/blockdev.sh@293 -- # waitforlisten 148102 00:42:35.169 02:12:34 -- common/autotest_common.sh@817 -- # '[' -z 148102 ']' 00:42:35.169 02:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:35.169 02:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:35.169 02:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:35.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:35.169 02:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:35.169 02:12:34 -- common/autotest_common.sh@10 -- # set +x 00:42:35.169 [2024-04-24 02:12:35.067389] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:35.169 [2024-04-24 02:12:35.067571] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148102 ] 00:42:35.427 [2024-04-24 02:12:35.259197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:35.686 [2024-04-24 02:12:35.517403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.686 [2024-04-24 02:12:35.517516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:35.686 [2024-04-24 02:12:35.517515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:36.254 02:12:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:36.254 02:12:36 -- common/autotest_common.sh@850 -- # return 0 00:42:36.254 02:12:36 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:36.254 I/O targets: 00:42:36.254 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:42:36.254 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:42:36.254 00:42:36.254 00:42:36.254 CUnit - A unit testing framework for C - Version 2.1-3 00:42:36.254 http://cunit.sourceforge.net/ 00:42:36.254 00:42:36.254 00:42:36.254 Suite: bdevio tests on: Nvme0n1p2 00:42:36.254 Test: blockdev write read block ...passed 00:42:36.254 Test: blockdev write zeroes read block ...passed 00:42:36.254 Test: blockdev write zeroes read no split ...passed 00:42:36.254 Test: blockdev write zeroes read split ...passed 00:42:36.254 Test: blockdev write zeroes read split partial ...passed 00:42:36.254 Test: blockdev reset ...[2024-04-24 02:12:36.330244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:36.254 [2024-04-24 02:12:36.334220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:36.254 passed 00:42:36.254 Test: blockdev write read 8 blocks ...passed 00:42:36.254 Test: blockdev write read size > 128k ...passed 00:42:36.254 Test: blockdev write read invalid size ...passed 00:42:36.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:36.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:36.254 Test: blockdev write read max offset ...passed 00:42:36.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:36.513 Test: blockdev writev readv 8 blocks ...passed 00:42:36.513 Test: blockdev writev readv 30 x 1block ...passed 00:42:36.513 Test: blockdev writev readv block ...passed 00:42:36.513 Test: blockdev writev readv size > 128k ...passed 00:42:36.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:36.513 Test: blockdev comparev and writev ...[2024-04-24 02:12:36.342986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x24c0b000 len:0x1000 00:42:36.513 [2024-04-24 02:12:36.343103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:36.513 passed 00:42:36.513 Test: blockdev nvme passthru rw ...passed 00:42:36.513 Test: blockdev nvme passthru vendor specific ...passed 00:42:36.513 Test: blockdev nvme admin passthru ...passed 00:42:36.513 Test: blockdev copy ...passed 00:42:36.513 Suite: bdevio tests on: Nvme0n1p1 00:42:36.513 Test: blockdev write read block ...passed 00:42:36.513 Test: blockdev write zeroes read block ...passed 00:42:36.513 Test: blockdev write zeroes read no split ...passed 00:42:36.513 Test: blockdev write zeroes read split ...passed 00:42:36.513 Test: blockdev write zeroes read split partial ...passed 00:42:36.513 Test: blockdev reset ...[2024-04-24 02:12:36.428850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:36.513 [2024-04-24 02:12:36.432894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:36.513 passed 00:42:36.513 Test: blockdev write read 8 blocks ...passed 00:42:36.513 Test: blockdev write read size > 128k ...passed 00:42:36.513 Test: blockdev write read invalid size ...passed 00:42:36.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:36.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:36.513 Test: blockdev write read max offset ...passed 00:42:36.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:36.513 Test: blockdev writev readv 8 blocks ...passed 00:42:36.513 Test: blockdev writev readv 30 x 1block ...passed 00:42:36.513 Test: blockdev writev readv block ...passed 00:42:36.513 Test: blockdev writev readv size > 128k ...passed 00:42:36.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:36.513 Test: blockdev comparev and writev ...[2024-04-24 02:12:36.440827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x24c0d000 len:0x1000 00:42:36.513 [2024-04-24 02:12:36.440916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:36.513 passed 00:42:36.513 Test: blockdev nvme passthru rw ...passed 00:42:36.513 Test: blockdev nvme passthru vendor specific ...passed 00:42:36.513 Test: blockdev nvme admin passthru ...passed 00:42:36.513 Test: blockdev copy ...passed 00:42:36.513 00:42:36.513 Run Summary: Type Total Ran Passed Failed Inactive 00:42:36.513 suites 2 2 n/a 0 0 00:42:36.513 tests 46 46 46 0 0 00:42:36.513 asserts 284 284 284 0 n/a 00:42:36.513 00:42:36.513 Elapsed time = 0.555 seconds 00:42:36.513 0 00:42:36.513 02:12:36 -- bdev/blockdev.sh@295 -- # killprocess 148102 00:42:36.513 02:12:36 -- common/autotest_common.sh@936 -- # '[' -z 148102 ']' 00:42:36.513 02:12:36 -- common/autotest_common.sh@940 -- # kill -0 148102 00:42:36.513 02:12:36 -- common/autotest_common.sh@941 -- # uname 00:42:36.513 02:12:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:36.513 02:12:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148102 00:42:36.513 02:12:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:36.513 02:12:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:36.513 02:12:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148102' 00:42:36.513 killing process with pid 148102 00:42:36.513 02:12:36 -- common/autotest_common.sh@955 -- # kill 148102 00:42:36.513 02:12:36 -- common/autotest_common.sh@960 -- # wait 148102 00:42:38.460 02:12:38 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:42:38.460 00:42:38.460 real 0m3.028s 00:42:38.460 user 0m7.167s 00:42:38.460 sys 0m0.395s 00:42:38.460 02:12:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:38.460 02:12:38 -- common/autotest_common.sh@10 -- # set +x 00:42:38.460 ************************************ 00:42:38.460 END TEST bdev_bounds 00:42:38.460 ************************************ 00:42:38.460 02:12:38 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:42:38.460 02:12:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:42:38.460 02:12:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:38.460 02:12:38 -- common/autotest_common.sh@10 -- # set +x 00:42:38.460 ************************************ 00:42:38.460 START TEST bdev_nbd 
00:42:38.460 ************************************ 00:42:38.460 02:12:38 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:42:38.460 02:12:38 -- bdev/blockdev.sh@300 -- # uname -s 00:42:38.460 02:12:38 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:42:38.460 02:12:38 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:38.460 02:12:38 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:38.460 02:12:38 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:42:38.460 02:12:38 -- bdev/blockdev.sh@304 -- # local bdev_all 00:42:38.460 02:12:38 -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:42:38.460 02:12:38 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:42:38.460 02:12:38 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:42:38.460 02:12:38 -- bdev/blockdev.sh@311 -- # local nbd_all 00:42:38.460 02:12:38 -- bdev/blockdev.sh@312 -- # bdev_num=2 00:42:38.460 02:12:38 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:38.460 02:12:38 -- bdev/blockdev.sh@314 -- # local nbd_list 00:42:38.460 02:12:38 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:38.460 02:12:38 -- bdev/blockdev.sh@315 -- # local bdev_list 00:42:38.460 02:12:38 -- bdev/blockdev.sh@318 -- # nbd_pid=148175 00:42:38.460 02:12:38 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:38.460 02:12:38 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:38.460 02:12:38 -- bdev/blockdev.sh@320 -- # waitforlisten 148175 /var/tmp/spdk-nbd.sock 00:42:38.460 02:12:38 -- common/autotest_common.sh@817 -- # '[' -z 148175 ']' 00:42:38.460 02:12:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:38.460 02:12:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:38.460 02:12:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:38.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:38.460 02:12:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:38.460 02:12:38 -- common/autotest_common.sh@10 -- # set +x 00:42:38.460 [2024-04-24 02:12:38.210611] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:42:38.460 [2024-04-24 02:12:38.210818] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:38.460 [2024-04-24 02:12:38.388113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.718 [2024-04-24 02:12:38.627634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.286 02:12:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:39.286 02:12:39 -- common/autotest_common.sh@850 -- # return 0 00:42:39.286 02:12:39 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@24 -- # local i 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:39.286 02:12:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:42:39.545 02:12:39 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:39.545 02:12:39 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:39.545 02:12:39 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:39.545 02:12:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:42:39.545 02:12:39 -- common/autotest_common.sh@855 -- # local i 00:42:39.545 02:12:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:42:39.545 02:12:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:42:39.545 02:12:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:42:39.545 02:12:39 -- common/autotest_common.sh@859 -- # break 00:42:39.545 02:12:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:42:39.545 02:12:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:42:39.545 02:12:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:39.545 1+0 records in 00:42:39.545 1+0 records out 00:42:39.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420793 s, 9.7 MB/s 00:42:39.545 02:12:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:39.545 02:12:39 -- common/autotest_common.sh@872 -- # size=4096 00:42:39.545 02:12:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:39.545 02:12:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:42:39.545 02:12:39 -- common/autotest_common.sh@875 -- # return 0 00:42:39.545 02:12:39 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:39.545 02:12:39 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:39.545 02:12:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:42:39.804 02:12:39 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:42:39.804 02:12:39 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:42:39.804 02:12:39 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:42:39.804 02:12:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:42:39.804 02:12:39 -- common/autotest_common.sh@855 -- # local i 00:42:39.804 02:12:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:42:39.804 02:12:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:42:39.804 02:12:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:42:39.804 02:12:39 -- common/autotest_common.sh@859 -- # break 00:42:39.804 02:12:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:42:39.804 02:12:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:42:39.804 02:12:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:39.804 1+0 records in 00:42:39.804 1+0 records out 00:42:39.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485271 s, 8.4 MB/s 00:42:39.804 02:12:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:39.804 02:12:39 -- common/autotest_common.sh@872 -- # size=4096 00:42:39.804 02:12:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:39.804 02:12:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:42:39.804 02:12:39 -- common/autotest_common.sh@875 -- # return 0 00:42:39.804 02:12:39 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:39.804 02:12:39 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:39.804 02:12:39 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:40.062 { 00:42:40.062 "nbd_device": "/dev/nbd0", 00:42:40.062 "bdev_name": "Nvme0n1p1" 00:42:40.062 }, 00:42:40.062 { 00:42:40.062 "nbd_device": "/dev/nbd1", 00:42:40.062 "bdev_name": "Nvme0n1p2" 00:42:40.062 } 00:42:40.062 ]' 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:40.062 { 00:42:40.062 "nbd_device": "/dev/nbd0", 00:42:40.062 "bdev_name": "Nvme0n1p1" 00:42:40.062 }, 00:42:40.062 { 00:42:40.062 "nbd_device": "/dev/nbd1", 00:42:40.062 "bdev_name": "Nvme0n1p2" 00:42:40.062 } 00:42:40.062 ]' 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@51 -- # local i 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:40.062 02:12:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:40.321 02:12:40 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@41 -- # break 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@45 -- # return 0 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:40.321 02:12:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:40.579 02:12:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@41 -- # break 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@45 -- # return 0 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:40.580 02:12:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@65 -- # true 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@65 -- # count=0 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@122 -- # count=0 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@127 -- # return 0 00:42:40.838 02:12:40 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@12 -- # local i 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:40.838 02:12:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:42:41.096 /dev/nbd0 00:42:41.096 02:12:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:41.096 02:12:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:41.096 02:12:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:42:41.096 02:12:41 -- common/autotest_common.sh@855 -- # local i 00:42:41.096 02:12:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:42:41.096 02:12:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:42:41.096 02:12:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:42:41.096 02:12:41 -- common/autotest_common.sh@859 -- # break 00:42:41.096 02:12:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:42:41.096 02:12:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:42:41.096 02:12:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:41.096 1+0 records in 00:42:41.096 1+0 records out 00:42:41.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082271 s, 5.0 MB/s 00:42:41.096 02:12:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:41.096 02:12:41 -- common/autotest_common.sh@872 -- # size=4096 00:42:41.096 02:12:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:41.096 02:12:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:42:41.096 02:12:41 -- common/autotest_common.sh@875 -- # return 0 00:42:41.096 02:12:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:41.096 02:12:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:41.096 02:12:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:42:41.356 /dev/nbd1 00:42:41.356 02:12:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:41.356 02:12:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:41.356 02:12:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:42:41.356 02:12:41 -- common/autotest_common.sh@855 -- # local i 00:42:41.356 02:12:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:42:41.356 02:12:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:42:41.356 02:12:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:42:41.356 02:12:41 -- common/autotest_common.sh@859 -- # break 00:42:41.356 02:12:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:42:41.356 02:12:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:42:41.356 02:12:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:41.356 1+0 records in 00:42:41.356 1+0 records out 00:42:41.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063629 s, 6.4 MB/s 00:42:41.356 02:12:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:41.356 02:12:41 -- common/autotest_common.sh@872 -- # size=4096 00:42:41.356 02:12:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:41.356 02:12:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:42:41.356 02:12:41 -- common/autotest_common.sh@875 -- # return 0 00:42:41.356 02:12:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:41.356 02:12:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:41.356 02:12:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
00:42:41.356 02:12:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:41.356 02:12:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:41.923 { 00:42:41.923 "nbd_device": "/dev/nbd0", 00:42:41.923 "bdev_name": "Nvme0n1p1" 00:42:41.923 }, 00:42:41.923 { 00:42:41.923 "nbd_device": "/dev/nbd1", 00:42:41.923 "bdev_name": "Nvme0n1p2" 00:42:41.923 } 00:42:41.923 ]' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:41.923 { 00:42:41.923 "nbd_device": "/dev/nbd0", 00:42:41.923 "bdev_name": "Nvme0n1p1" 00:42:41.923 }, 00:42:41.923 { 00:42:41.923 "nbd_device": "/dev/nbd1", 00:42:41.923 "bdev_name": "Nvme0n1p2" 00:42:41.923 } 00:42:41.923 ]' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:42:41.923 /dev/nbd1' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:42:41.923 /dev/nbd1' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@65 -- # count=2 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@95 -- # count=2 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:41.923 256+0 records in 00:42:41.923 256+0 records out 00:42:41.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00827172 s, 127 MB/s 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:41.923 256+0 records in 00:42:41.923 256+0 records out 00:42:41.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0750393 s, 14.0 MB/s 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:42:41.923 256+0 records in 00:42:41.923 256+0 records out 00:42:41.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0699211 s, 15.0 MB/s 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:42:41.923 02:12:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:41.923 02:12:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@51 -- # local i 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:41.924 02:12:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@41 -- # break 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:42.194 02:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@41 -- # break 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:42.471 02:12:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@65 -- # true 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@65 -- # count=0 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@104 -- # count=0 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:42.729 02:12:42 -- 
bdev/nbd_common.sh@109 -- # return 0 00:42:42.729 02:12:42 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:42.729 02:12:42 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:42.987 malloc_lvol_verify 00:42:42.987 02:12:43 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:43.244 2c20d1bf-4ea3-4e18-89e7-e6cf7527e46f 00:42:43.244 02:12:43 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:42:43.503 b2c08760-2a2a-4a38-b6e9-43bcf856ae99 00:42:43.503 02:12:43 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:42:43.761 /dev/nbd0 00:42:43.761 02:12:43 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:42:43.761 mke2fs 1.46.5 (30-Dec-2021) 00:42:43.761 00:42:43.761 Filesystem too small for a journal 00:42:43.761 Discarding device blocks: 0/1024 done 00:42:43.761 Creating filesystem with 1024 4k blocks and 1024 inodes 00:42:43.761 00:42:43.761 Allocating group tables: 0/1 done 00:42:43.761 Writing inode tables: 0/1 done 00:42:43.761 Writing superblocks and filesystem accounting information: 0/1 done 00:42:43.761 00:42:43.761 02:12:43 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@51 -- # local i 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@41 -- # break 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@45 -- # return 0 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:42:43.762 02:12:43 -- bdev/nbd_common.sh@147 -- # return 0 00:42:43.762 02:12:43 -- bdev/blockdev.sh@326 -- # killprocess 148175 00:42:43.762 02:12:43 -- common/autotest_common.sh@936 -- # '[' -z 148175 ']' 00:42:43.762 02:12:43 -- common/autotest_common.sh@940 -- # kill -0 148175 00:42:43.762 02:12:43 -- common/autotest_common.sh@941 -- # uname 00:42:43.762 02:12:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:43.762 02:12:43 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148175 00:42:43.762 02:12:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:43.762 02:12:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:43.762 02:12:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148175' 00:42:43.762 killing process with pid 148175 00:42:43.762 02:12:43 -- common/autotest_common.sh@955 -- # kill 148175 00:42:43.762 02:12:43 -- common/autotest_common.sh@960 -- # wait 148175 00:42:45.665 ************************************ 00:42:45.665 END TEST bdev_nbd 00:42:45.665 ************************************ 00:42:45.665 02:12:45 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:42:45.665 00:42:45.665 real 0m7.137s 00:42:45.665 user 0m9.700s 00:42:45.665 sys 0m1.998s 00:42:45.665 02:12:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:45.665 02:12:45 -- common/autotest_common.sh@10 -- # set +x 00:42:45.665 02:12:45 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:42:45.665 02:12:45 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:42:45.665 skipping fio tests on NVMe due to multi-ns failures. 00:42:45.665 02:12:45 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:42:45.665 02:12:45 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:42:45.665 02:12:45 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:45.665 02:12:45 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:45.665 02:12:45 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:42:45.665 02:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:45.665 02:12:45 -- common/autotest_common.sh@10 -- # set +x 00:42:45.665 ************************************ 00:42:45.665 START TEST bdev_verify 00:42:45.665 ************************************ 00:42:45.665 02:12:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:45.665 [2024-04-24 02:12:45.433434] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:45.665 [2024-04-24 02:12:45.433699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148440 ] 00:42:45.665 [2024-04-24 02:12:45.625584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:45.923 [2024-04-24 02:12:45.921422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:45.923 [2024-04-24 02:12:45.921425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.489 Running I/O for 5 seconds... 
00:42:51.792 00:42:51.792 Latency(us) 00:42:51.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.792 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:51.792 Verification LBA range: start 0x0 length 0x4ff80 00:42:51.792 Nvme0n1p1 : 5.02 4855.83 18.97 0.00 0.00 26264.70 1599.39 32455.92 00:42:51.792 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:51.792 Verification LBA range: start 0x4ff80 length 0x4ff80 00:42:51.792 Nvme0n1p1 : 5.02 4845.31 18.93 0.00 0.00 26325.34 4774.77 30583.47 00:42:51.792 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:51.792 Verification LBA range: start 0x0 length 0x4ff7f 00:42:51.792 Nvme0n1p2 : 5.03 4862.88 19.00 0.00 0.00 26217.30 2652.65 32455.92 00:42:51.792 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:51.792 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:42:51.792 Nvme0n1p2 : 5.03 4848.70 18.94 0.00 0.00 26255.63 3229.99 31207.62 00:42:51.792 =================================================================================================================== 00:42:51.792 Total : 19412.72 75.83 0.00 0.00 26265.67 1599.39 32455.92 00:42:53.177 ************************************ 00:42:53.177 END TEST bdev_verify 00:42:53.177 ************************************ 00:42:53.177 00:42:53.177 real 0m7.748s 00:42:53.177 user 0m14.090s 00:42:53.177 sys 0m0.260s 00:42:53.177 02:12:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:53.177 02:12:53 -- common/autotest_common.sh@10 -- # set +x 00:42:53.177 02:12:53 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:53.177 02:12:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:42:53.177 02:12:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:53.177 02:12:53 -- common/autotest_common.sh@10 -- # set +x 00:42:53.177 ************************************ 00:42:53.177 START TEST bdev_verify_big_io 00:42:53.177 ************************************ 00:42:53.177 02:12:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:53.435 [2024-04-24 02:12:53.279140] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:42:53.435 [2024-04-24 02:12:53.279283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148553 ] 00:42:53.435 [2024-04-24 02:12:53.448398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:53.692 [2024-04-24 02:12:53.736102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.692 [2024-04-24 02:12:53.736105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:54.257 Running I/O for 5 seconds... 
00:42:59.521 00:42:59.521 Latency(us) 00:42:59.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.522 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:59.522 Verification LBA range: start 0x0 length 0x4ff8 00:42:59.522 Nvme0n1p1 : 5.18 469.65 29.35 0.00 0.00 267971.64 4868.39 329552.46 00:42:59.522 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:59.522 Verification LBA range: start 0x4ff8 length 0x4ff8 00:42:59.522 Nvme0n1p1 : 5.20 443.27 27.70 0.00 0.00 275160.96 1903.66 281617.55 00:42:59.522 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:59.522 Verification LBA range: start 0x0 length 0x4ff7 00:42:59.522 Nvme0n1p2 : 5.18 464.21 29.01 0.00 0.00 263972.79 3635.69 367500.92 00:42:59.522 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:59.522 Verification LBA range: start 0x4ff7 length 0x4ff7 00:42:59.522 Nvme0n1p2 : 5.19 443.69 27.73 0.00 0.00 283134.84 7926.74 293601.28 00:42:59.522 =================================================================================================================== 00:42:59.522 Total : 1820.81 113.80 0.00 0.00 272406.05 1903.66 367500.92 00:43:01.421 ************************************ 00:43:01.421 END TEST bdev_verify_big_io 00:43:01.421 ************************************ 00:43:01.421 00:43:01.421 real 0m7.944s 00:43:01.421 user 0m14.525s 00:43:01.421 sys 0m0.268s 00:43:01.421 02:13:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:01.421 02:13:01 -- common/autotest_common.sh@10 -- # set +x 00:43:01.421 02:13:01 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:01.421 02:13:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:43:01.421 02:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:01.421 02:13:01 -- common/autotest_common.sh@10 -- # set +x 00:43:01.421 ************************************ 00:43:01.421 START TEST bdev_write_zeroes 00:43:01.421 ************************************ 00:43:01.421 02:13:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:01.421 [2024-04-24 02:13:01.338581] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:43:01.421 [2024-04-24 02:13:01.338729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148670 ] 00:43:01.680 [2024-04-24 02:13:01.506456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:01.939 [2024-04-24 02:13:01.787797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.505 Running I/O for 1 seconds... 
00:43:03.440 00:43:03.440 Latency(us) 00:43:03.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.440 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:03.440 Nvme0n1p1 : 1.00 28284.62 110.49 0.00 0.00 4515.41 2746.27 10860.25 00:43:03.440 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:03.440 Nvme0n1p2 : 1.01 28250.26 110.35 0.00 0.00 4515.30 2949.12 10735.42 00:43:03.440 =================================================================================================================== 00:43:03.440 Total : 56534.87 220.84 0.00 0.00 4515.36 2746.27 10860.25 00:43:04.878 ************************************ 00:43:04.878 END TEST bdev_write_zeroes 00:43:04.878 ************************************ 00:43:04.878 00:43:04.878 real 0m3.385s 00:43:04.878 user 0m3.055s 00:43:04.878 sys 0m0.230s 00:43:04.878 02:13:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:04.878 02:13:04 -- common/autotest_common.sh@10 -- # set +x 00:43:04.878 02:13:04 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:04.878 02:13:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:43:04.878 02:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:04.878 02:13:04 -- common/autotest_common.sh@10 -- # set +x 00:43:04.878 ************************************ 00:43:04.878 START TEST bdev_json_nonenclosed 00:43:04.878 ************************************ 00:43:04.878 02:13:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:04.878 [2024-04-24 02:13:04.849806] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:43:04.878 [2024-04-24 02:13:04.850003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148738 ] 00:43:05.136 [2024-04-24 02:13:05.029561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.394 [2024-04-24 02:13:05.246697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.394 [2024-04-24 02:13:05.246822] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:43:05.394 [2024-04-24 02:13:05.246875] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:05.394 [2024-04-24 02:13:05.246903] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:05.654 00:43:05.654 real 0m0.931s 00:43:05.654 user 0m0.687s 00:43:05.654 sys 0m0.144s 00:43:05.654 02:13:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:05.654 ************************************ 00:43:05.654 END TEST bdev_json_nonenclosed 00:43:05.654 ************************************ 00:43:05.654 02:13:05 -- common/autotest_common.sh@10 -- # set +x 00:43:05.912 02:13:05 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:05.912 02:13:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:43:05.912 02:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:05.912 02:13:05 -- common/autotest_common.sh@10 -- # set +x 00:43:05.912 ************************************ 00:43:05.912 START TEST bdev_json_nonarray 00:43:05.912 ************************************ 00:43:05.912 02:13:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:05.912 [2024-04-24 02:13:05.865057] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:43:05.912 [2024-04-24 02:13:05.865215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148781 ] 00:43:06.170 [2024-04-24 02:13:06.024698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:06.170 [2024-04-24 02:13:06.244070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.170 [2024-04-24 02:13:06.244200] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:43:06.170 [2024-04-24 02:13:06.244234] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:06.170 [2024-04-24 02:13:06.244257] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:06.737 ************************************ 00:43:06.737 END TEST bdev_json_nonarray 00:43:06.737 ************************************ 00:43:06.737 00:43:06.737 real 0m0.910s 00:43:06.737 user 0m0.647s 00:43:06.737 sys 0m0.163s 00:43:06.737 02:13:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:06.737 02:13:06 -- common/autotest_common.sh@10 -- # set +x 00:43:06.737 02:13:06 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:43:06.737 02:13:06 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:43:06.737 02:13:06 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:43:06.737 02:13:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:06.737 02:13:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:06.737 02:13:06 -- common/autotest_common.sh@10 -- # set +x 00:43:06.996 ************************************ 00:43:06.996 START TEST bdev_gpt_uuid 00:43:06.996 ************************************ 00:43:06.996 02:13:06 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:43:06.996 02:13:06 -- bdev/blockdev.sh@614 -- # local bdev 00:43:06.996 02:13:06 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:43:06.996 02:13:06 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=148817 00:43:06.996 02:13:06 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:06.996 02:13:06 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:06.996 02:13:06 -- bdev/blockdev.sh@49 -- # waitforlisten 148817 00:43:06.996 02:13:06 -- common/autotest_common.sh@817 -- # '[' -z 148817 ']' 00:43:06.996 02:13:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:06.996 02:13:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:43:06.996 02:13:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:06.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:06.996 02:13:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:43:06.996 02:13:06 -- common/autotest_common.sh@10 -- # set +x 00:43:06.996 [2024-04-24 02:13:06.910272] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:43:06.996 [2024-04-24 02:13:06.910449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148817 ] 00:43:06.996 [2024-04-24 02:13:07.074814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.253 [2024-04-24 02:13:07.286484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.187 02:13:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:43:08.187 02:13:08 -- common/autotest_common.sh@850 -- # return 0 00:43:08.187 02:13:08 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:08.187 02:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:08.187 02:13:08 -- common/autotest_common.sh@10 -- # set +x 00:43:08.187 Some configs were skipped because the RPC state that can call them passed over. 
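A minimal sketch of the check the bdev_gpt_uuid test performs from this point on, assuming an spdk_tgt already listening on the default /var/tmp/spdk.sock and reusing the repo path and the SPDK_TEST_first partition GUID printed in this run (both are specific to this machine and would differ elsewhere):

  #!/usr/bin/env bash
  # Load the bdev config, wait for bdev examine to finish, then look the GPT
  # partition bdev up by its unique partition GUID and confirm that the alias
  # and the driver-specific GUID round-trip to the same value.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  guid=6f89f330-603b-4116-ac73-2ca8eae53030
  "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  "$rpc" bdev_wait_for_examine
  bdev=$("$rpc" bdev_get_bdevs -b "$guid")
  [[ $(jq -r 'length' <<< "$bdev") == 1 ]]
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]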
00:43:08.187 02:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:08.187 02:13:08 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:43:08.187 02:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:08.187 02:13:08 -- common/autotest_common.sh@10 -- # set +x 00:43:08.187 02:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:08.187 02:13:08 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:43:08.187 02:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:08.187 02:13:08 -- common/autotest_common.sh@10 -- # set +x 00:43:08.187 02:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:08.187 02:13:08 -- bdev/blockdev.sh@621 -- # bdev='[ 00:43:08.187 { 00:43:08.187 "name": "Nvme0n1p1", 00:43:08.187 "aliases": [ 00:43:08.187 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:43:08.187 ], 00:43:08.187 "product_name": "GPT Disk", 00:43:08.187 "block_size": 4096, 00:43:08.187 "num_blocks": 655104, 00:43:08.187 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:43:08.187 "assigned_rate_limits": { 00:43:08.187 "rw_ios_per_sec": 0, 00:43:08.187 "rw_mbytes_per_sec": 0, 00:43:08.187 "r_mbytes_per_sec": 0, 00:43:08.187 "w_mbytes_per_sec": 0 00:43:08.187 }, 00:43:08.187 "claimed": false, 00:43:08.187 "zoned": false, 00:43:08.187 "supported_io_types": { 00:43:08.187 "read": true, 00:43:08.187 "write": true, 00:43:08.187 "unmap": true, 00:43:08.187 "write_zeroes": true, 00:43:08.187 "flush": true, 00:43:08.187 "reset": true, 00:43:08.187 "compare": true, 00:43:08.187 "compare_and_write": false, 00:43:08.187 "abort": true, 00:43:08.187 "nvme_admin": false, 00:43:08.187 "nvme_io": false 00:43:08.187 }, 00:43:08.187 "driver_specific": { 00:43:08.187 "gpt": { 00:43:08.187 "base_bdev": "Nvme0n1", 00:43:08.187 "offset_blocks": 256, 00:43:08.187 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:43:08.187 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:43:08.187 "partition_name": "SPDK_TEST_first" 00:43:08.187 } 00:43:08.187 } 00:43:08.187 } 00:43:08.187 ]' 00:43:08.187 02:13:08 -- bdev/blockdev.sh@622 -- # jq -r length 00:43:08.445 02:13:08 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:43:08.445 02:13:08 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:43:08.445 02:13:08 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:43:08.445 02:13:08 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:43:08.445 02:13:08 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:43:08.445 02:13:08 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:43:08.445 02:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:08.445 02:13:08 -- common/autotest_common.sh@10 -- # set +x 00:43:08.445 02:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:08.445 02:13:08 -- bdev/blockdev.sh@626 -- # bdev='[ 00:43:08.445 { 00:43:08.445 "name": "Nvme0n1p2", 00:43:08.445 "aliases": [ 00:43:08.445 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:43:08.445 ], 00:43:08.445 "product_name": "GPT Disk", 00:43:08.445 "block_size": 4096, 00:43:08.445 "num_blocks": 655103, 00:43:08.445 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:43:08.445 "assigned_rate_limits": { 00:43:08.445 "rw_ios_per_sec": 0, 00:43:08.445 
"rw_mbytes_per_sec": 0, 00:43:08.445 "r_mbytes_per_sec": 0, 00:43:08.445 "w_mbytes_per_sec": 0 00:43:08.445 }, 00:43:08.445 "claimed": false, 00:43:08.445 "zoned": false, 00:43:08.445 "supported_io_types": { 00:43:08.445 "read": true, 00:43:08.445 "write": true, 00:43:08.445 "unmap": true, 00:43:08.445 "write_zeroes": true, 00:43:08.445 "flush": true, 00:43:08.445 "reset": true, 00:43:08.445 "compare": true, 00:43:08.445 "compare_and_write": false, 00:43:08.445 "abort": true, 00:43:08.445 "nvme_admin": false, 00:43:08.445 "nvme_io": false 00:43:08.445 }, 00:43:08.445 "driver_specific": { 00:43:08.445 "gpt": { 00:43:08.445 "base_bdev": "Nvme0n1", 00:43:08.445 "offset_blocks": 655360, 00:43:08.445 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:43:08.445 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:43:08.445 "partition_name": "SPDK_TEST_second" 00:43:08.445 } 00:43:08.445 } 00:43:08.445 } 00:43:08.445 ]' 00:43:08.445 02:13:08 -- bdev/blockdev.sh@627 -- # jq -r length 00:43:08.445 02:13:08 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:43:08.445 02:13:08 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:43:08.445 02:13:08 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:43:08.445 02:13:08 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:43:08.703 02:13:08 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:43:08.703 02:13:08 -- bdev/blockdev.sh@631 -- # killprocess 148817 00:43:08.703 02:13:08 -- common/autotest_common.sh@936 -- # '[' -z 148817 ']' 00:43:08.703 02:13:08 -- common/autotest_common.sh@940 -- # kill -0 148817 00:43:08.703 02:13:08 -- common/autotest_common.sh@941 -- # uname 00:43:08.703 02:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:43:08.703 02:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148817 00:43:08.703 02:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:43:08.703 02:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:43:08.703 02:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148817' 00:43:08.703 killing process with pid 148817 00:43:08.703 02:13:08 -- common/autotest_common.sh@955 -- # kill 148817 00:43:08.703 02:13:08 -- common/autotest_common.sh@960 -- # wait 148817 00:43:11.234 ************************************ 00:43:11.234 END TEST bdev_gpt_uuid 00:43:11.234 ************************************ 00:43:11.234 00:43:11.234 real 0m4.184s 00:43:11.234 user 0m4.473s 00:43:11.234 sys 0m0.502s 00:43:11.234 02:13:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:11.234 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:43:11.235 02:13:11 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:43:11.235 02:13:11 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:43:11.235 02:13:11 -- bdev/blockdev.sh@811 -- # cleanup 00:43:11.235 02:13:11 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:11.235 02:13:11 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:11.235 02:13:11 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:43:11.235 02:13:11 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:43:11.235 02:13:11 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:43:11.235 02:13:11 -- 
bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:43:11.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:11.493 Waiting for block devices as requested 00:43:11.493 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:11.751 02:13:11 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:43:11.751 02:13:11 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:43:11.751 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:43:11.751 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:43:11.751 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:43:11.751 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:43:11.751 02:13:11 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:43:11.751 00:43:11.751 real 0m48.192s 00:43:11.751 user 1m6.510s 00:43:11.751 sys 0m7.295s 00:43:11.751 02:13:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:11.751 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:43:11.751 ************************************ 00:43:11.751 END TEST blockdev_nvme_gpt 00:43:11.751 ************************************ 00:43:11.751 02:13:11 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:43:11.751 02:13:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:11.751 02:13:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:11.751 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:43:11.751 ************************************ 00:43:11.751 START TEST nvme 00:43:11.751 ************************************ 00:43:11.751 02:13:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:43:12.010 * Looking for test storage... 00:43:12.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:43:12.010 02:13:11 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:12.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:12.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:43:13.460 02:13:13 -- nvme/nvme.sh@79 -- # uname 00:43:13.460 02:13:13 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:43:13.460 02:13:13 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:43:13.460 02:13:13 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:43:13.460 02:13:13 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:43:13.460 02:13:13 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:43:13.460 02:13:13 -- common/autotest_common.sh@1055 -- # echo 0 00:43:13.460 02:13:13 -- common/autotest_common.sh@1057 -- # stubpid=149248 00:43:13.460 02:13:13 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:43:13.460 02:13:13 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:43:13.460 Waiting for stub to ready for secondary processes... 00:43:13.460 02:13:13 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:13.460 02:13:13 -- common/autotest_common.sh@1061 -- # [[ -e /proc/149248 ]] 00:43:13.460 02:13:13 -- common/autotest_common.sh@1062 -- # sleep 1s 00:43:13.460 [2024-04-24 02:13:13.495851] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:43:13.460 [2024-04-24 02:13:13.496048] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:43:14.394 02:13:14 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:14.394 02:13:14 -- common/autotest_common.sh@1061 -- # [[ -e /proc/149248 ]] 00:43:14.394 02:13:14 -- common/autotest_common.sh@1062 -- # sleep 1s 00:43:14.651 [2024-04-24 02:13:14.607359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:14.910 [2024-04-24 02:13:14.803313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:14.910 [2024-04-24 02:13:14.803467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:14.910 [2024-04-24 02:13:14.803465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:43:14.910 [2024-04-24 02:13:14.814547] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:43:14.910 [2024-04-24 02:13:14.814637] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:43:14.910 [2024-04-24 02:13:14.823933] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:43:14.910 [2024-04-24 02:13:14.826009] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:43:15.475 02:13:15 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:15.475 done. 00:43:15.475 02:13:15 -- common/autotest_common.sh@1064 -- # echo done. 00:43:15.475 02:13:15 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:43:15.475 02:13:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:43:15.475 02:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:15.475 02:13:15 -- common/autotest_common.sh@10 -- # set +x 00:43:15.475 ************************************ 00:43:15.475 START TEST nvme_reset 00:43:15.475 ************************************ 00:43:15.475 02:13:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:43:15.733 Initializing NVMe Controllers 00:43:15.733 Skipping QEMU NVMe SSD at 0000:00:10.0 00:43:15.733 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:43:15.733 00:43:15.733 real 0m0.318s 00:43:15.733 user 0m0.105s 00:43:15.733 sys 0m0.143s 00:43:15.733 02:13:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:15.733 02:13:15 -- common/autotest_common.sh@10 -- # set +x 00:43:15.733 ************************************ 00:43:15.733 END TEST nvme_reset 00:43:15.733 ************************************ 00:43:15.990 02:13:15 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:43:15.990 02:13:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:15.990 02:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:15.990 02:13:15 -- common/autotest_common.sh@10 -- # set +x 00:43:15.990 ************************************ 00:43:15.990 START TEST nvme_identify 00:43:15.991 ************************************ 00:43:15.991 02:13:15 -- common/autotest_common.sh@1111 -- # nvme_identify 00:43:15.991 02:13:15 -- nvme/nvme.sh@12 -- # bdfs=() 00:43:15.991 02:13:15 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:43:15.991 02:13:15 -- nvme/nvme.sh@13 
-- # bdfs=($(get_nvme_bdfs)) 00:43:15.991 02:13:15 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:43:15.991 02:13:15 -- common/autotest_common.sh@1499 -- # bdfs=() 00:43:15.991 02:13:15 -- common/autotest_common.sh@1499 -- # local bdfs 00:43:15.991 02:13:15 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:15.991 02:13:15 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:43:15.991 02:13:15 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:15.991 02:13:15 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:43:15.991 02:13:15 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:43:15.991 02:13:15 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:43:16.250 [2024-04-24 02:13:16.266538] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 149289 terminated unexpected 00:43:16.250 ===================================================== 00:43:16.250 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:16.250 ===================================================== 00:43:16.250 Controller Capabilities/Features 00:43:16.250 ================================ 00:43:16.250 Vendor ID: 1b36 00:43:16.250 Subsystem Vendor ID: 1af4 00:43:16.250 Serial Number: 12340 00:43:16.250 Model Number: QEMU NVMe Ctrl 00:43:16.250 Firmware Version: 8.0.0 00:43:16.250 Recommended Arb Burst: 6 00:43:16.250 IEEE OUI Identifier: 00 54 52 00:43:16.250 Multi-path I/O 00:43:16.250 May have multiple subsystem ports: No 00:43:16.250 May have multiple controllers: No 00:43:16.250 Associated with SR-IOV VF: No 00:43:16.250 Max Data Transfer Size: 524288 00:43:16.250 Max Number of Namespaces: 256 00:43:16.250 Max Number of I/O Queues: 64 00:43:16.250 NVMe Specification Version (VS): 1.4 00:43:16.250 NVMe Specification Version (Identify): 1.4 00:43:16.250 Maximum Queue Entries: 2048 00:43:16.250 Contiguous Queues Required: Yes 00:43:16.250 Arbitration Mechanisms Supported 00:43:16.250 Weighted Round Robin: Not Supported 00:43:16.250 Vendor Specific: Not Supported 00:43:16.250 Reset Timeout: 7500 ms 00:43:16.250 Doorbell Stride: 4 bytes 00:43:16.250 NVM Subsystem Reset: Not Supported 00:43:16.250 Command Sets Supported 00:43:16.250 NVM Command Set: Supported 00:43:16.250 Boot Partition: Not Supported 00:43:16.250 Memory Page Size Minimum: 4096 bytes 00:43:16.250 Memory Page Size Maximum: 65536 bytes 00:43:16.250 Persistent Memory Region: Not Supported 00:43:16.250 Optional Asynchronous Events Supported 00:43:16.250 Namespace Attribute Notices: Supported 00:43:16.250 Firmware Activation Notices: Not Supported 00:43:16.250 ANA Change Notices: Not Supported 00:43:16.250 PLE Aggregate Log Change Notices: Not Supported 00:43:16.250 LBA Status Info Alert Notices: Not Supported 00:43:16.250 EGE Aggregate Log Change Notices: Not Supported 00:43:16.250 Normal NVM Subsystem Shutdown event: Not Supported 00:43:16.250 Zone Descriptor Change Notices: Not Supported 00:43:16.250 Discovery Log Change Notices: Not Supported 00:43:16.250 Controller Attributes 00:43:16.250 128-bit Host Identifier: Not Supported 00:43:16.250 Non-Operational Permissive Mode: Not Supported 00:43:16.250 NVM Sets: Not Supported 00:43:16.250 Read Recovery Levels: Not Supported 00:43:16.250 Endurance Groups: Not Supported 00:43:16.250 Predictable Latency Mode: Not Supported 00:43:16.250 Traffic Based Keep ALive: Not Supported 00:43:16.250 Namespace Granularity: Not 
Supported 00:43:16.250 SQ Associations: Not Supported 00:43:16.250 UUID List: Not Supported 00:43:16.250 Multi-Domain Subsystem: Not Supported 00:43:16.250 Fixed Capacity Management: Not Supported 00:43:16.250 Variable Capacity Management: Not Supported 00:43:16.250 Delete Endurance Group: Not Supported 00:43:16.250 Delete NVM Set: Not Supported 00:43:16.250 Extended LBA Formats Supported: Supported 00:43:16.250 Flexible Data Placement Supported: Not Supported 00:43:16.250 00:43:16.250 Controller Memory Buffer Support 00:43:16.250 ================================ 00:43:16.250 Supported: No 00:43:16.250 00:43:16.250 Persistent Memory Region Support 00:43:16.250 ================================ 00:43:16.250 Supported: No 00:43:16.250 00:43:16.250 Admin Command Set Attributes 00:43:16.250 ============================ 00:43:16.250 Security Send/Receive: Not Supported 00:43:16.250 Format NVM: Supported 00:43:16.250 Firmware Activate/Download: Not Supported 00:43:16.250 Namespace Management: Supported 00:43:16.250 Device Self-Test: Not Supported 00:43:16.250 Directives: Supported 00:43:16.250 NVMe-MI: Not Supported 00:43:16.250 Virtualization Management: Not Supported 00:43:16.250 Doorbell Buffer Config: Supported 00:43:16.250 Get LBA Status Capability: Not Supported 00:43:16.250 Command & Feature Lockdown Capability: Not Supported 00:43:16.250 Abort Command Limit: 4 00:43:16.250 Async Event Request Limit: 4 00:43:16.250 Number of Firmware Slots: N/A 00:43:16.250 Firmware Slot 1 Read-Only: N/A 00:43:16.250 Firmware Activation Without Reset: N/A 00:43:16.250 Multiple Update Detection Support: N/A 00:43:16.250 Firmware Update Granularity: No Information Provided 00:43:16.250 Per-Namespace SMART Log: Yes 00:43:16.250 Asymmetric Namespace Access Log Page: Not Supported 00:43:16.250 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:43:16.250 Command Effects Log Page: Supported 00:43:16.250 Get Log Page Extended Data: Supported 00:43:16.250 Telemetry Log Pages: Not Supported 00:43:16.250 Persistent Event Log Pages: Not Supported 00:43:16.250 Supported Log Pages Log Page: May Support 00:43:16.250 Commands Supported & Effects Log Page: Not Supported 00:43:16.250 Feature Identifiers & Effects Log Page:May Support 00:43:16.250 NVMe-MI Commands & Effects Log Page: May Support 00:43:16.250 Data Area 4 for Telemetry Log: Not Supported 00:43:16.250 Error Log Page Entries Supported: 1 00:43:16.250 Keep Alive: Not Supported 00:43:16.250 00:43:16.250 NVM Command Set Attributes 00:43:16.250 ========================== 00:43:16.250 Submission Queue Entry Size 00:43:16.250 Max: 64 00:43:16.250 Min: 64 00:43:16.250 Completion Queue Entry Size 00:43:16.250 Max: 16 00:43:16.250 Min: 16 00:43:16.250 Number of Namespaces: 256 00:43:16.250 Compare Command: Supported 00:43:16.250 Write Uncorrectable Command: Not Supported 00:43:16.250 Dataset Management Command: Supported 00:43:16.250 Write Zeroes Command: Supported 00:43:16.250 Set Features Save Field: Supported 00:43:16.250 Reservations: Not Supported 00:43:16.250 Timestamp: Supported 00:43:16.250 Copy: Supported 00:43:16.250 Volatile Write Cache: Present 00:43:16.250 Atomic Write Unit (Normal): 1 00:43:16.250 Atomic Write Unit (PFail): 1 00:43:16.250 Atomic Compare & Write Unit: 1 00:43:16.250 Fused Compare & Write: Not Supported 00:43:16.250 Scatter-Gather List 00:43:16.250 SGL Command Set: Supported 00:43:16.250 SGL Keyed: Not Supported 00:43:16.250 SGL Bit Bucket Descriptor: Not Supported 00:43:16.250 SGL Metadata Pointer: Not Supported 00:43:16.250 Oversized SGL: Not 
Supported 00:43:16.250 SGL Metadata Address: Not Supported 00:43:16.250 SGL Offset: Not Supported 00:43:16.250 Transport SGL Data Block: Not Supported 00:43:16.250 Replay Protected Memory Block: Not Supported 00:43:16.250 00:43:16.250 Firmware Slot Information 00:43:16.250 ========================= 00:43:16.250 Active slot: 1 00:43:16.250 Slot 1 Firmware Revision: 1.0 00:43:16.250 00:43:16.250 00:43:16.250 Commands Supported and Effects 00:43:16.250 ============================== 00:43:16.250 Admin Commands 00:43:16.250 -------------- 00:43:16.250 Delete I/O Submission Queue (00h): Supported 00:43:16.250 Create I/O Submission Queue (01h): Supported 00:43:16.250 Get Log Page (02h): Supported 00:43:16.250 Delete I/O Completion Queue (04h): Supported 00:43:16.250 Create I/O Completion Queue (05h): Supported 00:43:16.250 Identify (06h): Supported 00:43:16.250 Abort (08h): Supported 00:43:16.250 Set Features (09h): Supported 00:43:16.250 Get Features (0Ah): Supported 00:43:16.250 Asynchronous Event Request (0Ch): Supported 00:43:16.250 Namespace Attachment (15h): Supported NS-Inventory-Change 00:43:16.250 Directive Send (19h): Supported 00:43:16.250 Directive Receive (1Ah): Supported 00:43:16.250 Virtualization Management (1Ch): Supported 00:43:16.250 Doorbell Buffer Config (7Ch): Supported 00:43:16.250 Format NVM (80h): Supported LBA-Change 00:43:16.250 I/O Commands 00:43:16.250 ------------ 00:43:16.250 Flush (00h): Supported LBA-Change 00:43:16.250 Write (01h): Supported LBA-Change 00:43:16.250 Read (02h): Supported 00:43:16.250 Compare (05h): Supported 00:43:16.250 Write Zeroes (08h): Supported LBA-Change 00:43:16.250 Dataset Management (09h): Supported LBA-Change 00:43:16.250 Unknown (0Ch): Supported 00:43:16.250 Unknown (12h): Supported 00:43:16.250 Copy (19h): Supported LBA-Change 00:43:16.250 Unknown (1Dh): Supported LBA-Change 00:43:16.250 00:43:16.250 Error Log 00:43:16.250 ========= 00:43:16.250 00:43:16.250 Arbitration 00:43:16.250 =========== 00:43:16.250 Arbitration Burst: no limit 00:43:16.250 00:43:16.250 Power Management 00:43:16.250 ================ 00:43:16.250 Number of Power States: 1 00:43:16.250 Current Power State: Power State #0 00:43:16.250 Power State #0: 00:43:16.251 Max Power: 25.00 W 00:43:16.251 Non-Operational State: Operational 00:43:16.251 Entry Latency: 16 microseconds 00:43:16.251 Exit Latency: 4 microseconds 00:43:16.251 Relative Read Throughput: 0 00:43:16.251 Relative Read Latency: 0 00:43:16.251 Relative Write Throughput: 0 00:43:16.251 Relative Write Latency: 0 00:43:16.251 Idle Power: Not Reported 00:43:16.251 Active Power: Not Reported 00:43:16.251 Non-Operational Permissive Mode: Not Supported 00:43:16.251 00:43:16.251 Health Information 00:43:16.251 ================== 00:43:16.251 Critical Warnings: 00:43:16.251 Available Spare Space: OK 00:43:16.251 Temperature: OK 00:43:16.251 Device Reliability: OK 00:43:16.251 Read Only: No 00:43:16.251 Volatile Memory Backup: OK 00:43:16.251 Current Temperature: 323 Kelvin (50 Celsius) 00:43:16.251 Temperature Threshold: 343 Kelvin (70 Celsius) 00:43:16.251 Available Spare: 0% 00:43:16.251 Available Spare Threshold: 0% 00:43:16.251 Life Percentage Used: 0% 00:43:16.251 Data Units Read: 4380 00:43:16.251 Data Units Written: 4038 00:43:16.251 Host Read Commands: 219728 00:43:16.251 Host Write Commands: 232726 00:43:16.251 Controller Busy Time: 0 minutes 00:43:16.251 Power Cycles: 0 00:43:16.251 Power On Hours: 0 hours 00:43:16.251 Unsafe Shutdowns: 0 00:43:16.251 Unrecoverable Media Errors: 0 00:43:16.251 Lifetime 
Error Log Entries: 0 00:43:16.251 Warning Temperature Time: 0 minutes 00:43:16.251 Critical Temperature Time: 0 minutes 00:43:16.251 00:43:16.251 Number of Queues 00:43:16.251 ================ 00:43:16.251 Number of I/O Submission Queues: 64 00:43:16.251 Number of I/O Completion Queues: 64 00:43:16.251 00:43:16.251 ZNS Specific Controller Data 00:43:16.251 ============================ 00:43:16.251 Zone Append Size Limit: 0 00:43:16.251 00:43:16.251 00:43:16.251 Active Namespaces 00:43:16.251 ================= 00:43:16.251 Namespace ID:1 00:43:16.251 Error Recovery Timeout: Unlimited 00:43:16.251 Command Set Identifier: NVM (00h) 00:43:16.251 Deallocate: Supported 00:43:16.251 Deallocated/Unwritten Error: Supported 00:43:16.251 Deallocated Read Value: All 0x00 00:43:16.251 Deallocate in Write Zeroes: Not Supported 00:43:16.251 Deallocated Guard Field: 0xFFFF 00:43:16.251 Flush: Supported 00:43:16.251 Reservation: Not Supported 00:43:16.251 Namespace Sharing Capabilities: Private 00:43:16.251 Size (in LBAs): 1310720 (5GiB) 00:43:16.251 Capacity (in LBAs): 1310720 (5GiB) 00:43:16.251 Utilization (in LBAs): 1310720 (5GiB) 00:43:16.251 Thin Provisioning: Not Supported 00:43:16.251 Per-NS Atomic Units: No 00:43:16.251 Maximum Single Source Range Length: 128 00:43:16.251 Maximum Copy Length: 128 00:43:16.251 Maximum Source Range Count: 128 00:43:16.251 NGUID/EUI64 Never Reused: No 00:43:16.251 Namespace Write Protected: No 00:43:16.251 Number of LBA Formats: 8 00:43:16.251 Current LBA Format: LBA Format #04 00:43:16.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:43:16.251 LBA Format #01: Data Size: 512 Metadata Size: 8 00:43:16.251 LBA Format #02: Data Size: 512 Metadata Size: 16 00:43:16.251 LBA Format #03: Data Size: 512 Metadata Size: 64 00:43:16.251 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:43:16.251 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:43:16.251 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:43:16.251 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:43:16.251 00:43:16.251 02:13:16 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:43:16.251 02:13:16 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:43:16.510 ===================================================== 00:43:16.510 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:16.510 ===================================================== 00:43:16.510 Controller Capabilities/Features 00:43:16.510 ================================ 00:43:16.510 Vendor ID: 1b36 00:43:16.510 Subsystem Vendor ID: 1af4 00:43:16.510 Serial Number: 12340 00:43:16.510 Model Number: QEMU NVMe Ctrl 00:43:16.510 Firmware Version: 8.0.0 00:43:16.510 Recommended Arb Burst: 6 00:43:16.510 IEEE OUI Identifier: 00 54 52 00:43:16.510 Multi-path I/O 00:43:16.510 May have multiple subsystem ports: No 00:43:16.510 May have multiple controllers: No 00:43:16.510 Associated with SR-IOV VF: No 00:43:16.510 Max Data Transfer Size: 524288 00:43:16.510 Max Number of Namespaces: 256 00:43:16.510 Max Number of I/O Queues: 64 00:43:16.510 NVMe Specification Version (VS): 1.4 00:43:16.510 NVMe Specification Version (Identify): 1.4 00:43:16.510 Maximum Queue Entries: 2048 00:43:16.510 Contiguous Queues Required: Yes 00:43:16.510 Arbitration Mechanisms Supported 00:43:16.510 Weighted Round Robin: Not Supported 00:43:16.510 Vendor Specific: Not Supported 00:43:16.510 Reset Timeout: 7500 ms 00:43:16.510 Doorbell Stride: 4 bytes 00:43:16.510 NVM Subsystem Reset: Not Supported 
00:43:16.510 Command Sets Supported 00:43:16.510 NVM Command Set: Supported 00:43:16.510 Boot Partition: Not Supported 00:43:16.510 Memory Page Size Minimum: 4096 bytes 00:43:16.510 Memory Page Size Maximum: 65536 bytes 00:43:16.510 Persistent Memory Region: Not Supported 00:43:16.510 Optional Asynchronous Events Supported 00:43:16.510 Namespace Attribute Notices: Supported 00:43:16.510 Firmware Activation Notices: Not Supported 00:43:16.510 ANA Change Notices: Not Supported 00:43:16.510 PLE Aggregate Log Change Notices: Not Supported 00:43:16.510 LBA Status Info Alert Notices: Not Supported 00:43:16.510 EGE Aggregate Log Change Notices: Not Supported 00:43:16.510 Normal NVM Subsystem Shutdown event: Not Supported 00:43:16.510 Zone Descriptor Change Notices: Not Supported 00:43:16.510 Discovery Log Change Notices: Not Supported 00:43:16.510 Controller Attributes 00:43:16.510 128-bit Host Identifier: Not Supported 00:43:16.510 Non-Operational Permissive Mode: Not Supported 00:43:16.510 NVM Sets: Not Supported 00:43:16.510 Read Recovery Levels: Not Supported 00:43:16.510 Endurance Groups: Not Supported 00:43:16.510 Predictable Latency Mode: Not Supported 00:43:16.510 Traffic Based Keep ALive: Not Supported 00:43:16.510 Namespace Granularity: Not Supported 00:43:16.510 SQ Associations: Not Supported 00:43:16.510 UUID List: Not Supported 00:43:16.510 Multi-Domain Subsystem: Not Supported 00:43:16.510 Fixed Capacity Management: Not Supported 00:43:16.511 Variable Capacity Management: Not Supported 00:43:16.511 Delete Endurance Group: Not Supported 00:43:16.511 Delete NVM Set: Not Supported 00:43:16.511 Extended LBA Formats Supported: Supported 00:43:16.511 Flexible Data Placement Supported: Not Supported 00:43:16.511 00:43:16.511 Controller Memory Buffer Support 00:43:16.511 ================================ 00:43:16.511 Supported: No 00:43:16.511 00:43:16.511 Persistent Memory Region Support 00:43:16.511 ================================ 00:43:16.511 Supported: No 00:43:16.511 00:43:16.511 Admin Command Set Attributes 00:43:16.511 ============================ 00:43:16.511 Security Send/Receive: Not Supported 00:43:16.511 Format NVM: Supported 00:43:16.511 Firmware Activate/Download: Not Supported 00:43:16.511 Namespace Management: Supported 00:43:16.511 Device Self-Test: Not Supported 00:43:16.511 Directives: Supported 00:43:16.511 NVMe-MI: Not Supported 00:43:16.511 Virtualization Management: Not Supported 00:43:16.511 Doorbell Buffer Config: Supported 00:43:16.511 Get LBA Status Capability: Not Supported 00:43:16.511 Command & Feature Lockdown Capability: Not Supported 00:43:16.511 Abort Command Limit: 4 00:43:16.511 Async Event Request Limit: 4 00:43:16.511 Number of Firmware Slots: N/A 00:43:16.511 Firmware Slot 1 Read-Only: N/A 00:43:16.511 Firmware Activation Without Reset: N/A 00:43:16.511 Multiple Update Detection Support: N/A 00:43:16.511 Firmware Update Granularity: No Information Provided 00:43:16.511 Per-Namespace SMART Log: Yes 00:43:16.511 Asymmetric Namespace Access Log Page: Not Supported 00:43:16.511 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:43:16.511 Command Effects Log Page: Supported 00:43:16.511 Get Log Page Extended Data: Supported 00:43:16.511 Telemetry Log Pages: Not Supported 00:43:16.511 Persistent Event Log Pages: Not Supported 00:43:16.511 Supported Log Pages Log Page: May Support 00:43:16.511 Commands Supported & Effects Log Page: Not Supported 00:43:16.511 Feature Identifiers & Effects Log Page:May Support 00:43:16.511 NVMe-MI Commands & Effects Log Page: May 
Support 00:43:16.511 Data Area 4 for Telemetry Log: Not Supported 00:43:16.511 Error Log Page Entries Supported: 1 00:43:16.511 Keep Alive: Not Supported 00:43:16.511 00:43:16.511 NVM Command Set Attributes 00:43:16.511 ========================== 00:43:16.511 Submission Queue Entry Size 00:43:16.511 Max: 64 00:43:16.511 Min: 64 00:43:16.511 Completion Queue Entry Size 00:43:16.511 Max: 16 00:43:16.511 Min: 16 00:43:16.511 Number of Namespaces: 256 00:43:16.511 Compare Command: Supported 00:43:16.511 Write Uncorrectable Command: Not Supported 00:43:16.511 Dataset Management Command: Supported 00:43:16.511 Write Zeroes Command: Supported 00:43:16.511 Set Features Save Field: Supported 00:43:16.511 Reservations: Not Supported 00:43:16.511 Timestamp: Supported 00:43:16.511 Copy: Supported 00:43:16.511 Volatile Write Cache: Present 00:43:16.511 Atomic Write Unit (Normal): 1 00:43:16.511 Atomic Write Unit (PFail): 1 00:43:16.511 Atomic Compare & Write Unit: 1 00:43:16.511 Fused Compare & Write: Not Supported 00:43:16.511 Scatter-Gather List 00:43:16.511 SGL Command Set: Supported 00:43:16.511 SGL Keyed: Not Supported 00:43:16.511 SGL Bit Bucket Descriptor: Not Supported 00:43:16.511 SGL Metadata Pointer: Not Supported 00:43:16.511 Oversized SGL: Not Supported 00:43:16.511 SGL Metadata Address: Not Supported 00:43:16.511 SGL Offset: Not Supported 00:43:16.511 Transport SGL Data Block: Not Supported 00:43:16.511 Replay Protected Memory Block: Not Supported 00:43:16.511 00:43:16.511 Firmware Slot Information 00:43:16.511 ========================= 00:43:16.511 Active slot: 1 00:43:16.511 Slot 1 Firmware Revision: 1.0 00:43:16.511 00:43:16.511 00:43:16.511 Commands Supported and Effects 00:43:16.511 ============================== 00:43:16.511 Admin Commands 00:43:16.511 -------------- 00:43:16.511 Delete I/O Submission Queue (00h): Supported 00:43:16.511 Create I/O Submission Queue (01h): Supported 00:43:16.511 Get Log Page (02h): Supported 00:43:16.511 Delete I/O Completion Queue (04h): Supported 00:43:16.511 Create I/O Completion Queue (05h): Supported 00:43:16.511 Identify (06h): Supported 00:43:16.511 Abort (08h): Supported 00:43:16.511 Set Features (09h): Supported 00:43:16.511 Get Features (0Ah): Supported 00:43:16.511 Asynchronous Event Request (0Ch): Supported 00:43:16.511 Namespace Attachment (15h): Supported NS-Inventory-Change 00:43:16.511 Directive Send (19h): Supported 00:43:16.511 Directive Receive (1Ah): Supported 00:43:16.511 Virtualization Management (1Ch): Supported 00:43:16.511 Doorbell Buffer Config (7Ch): Supported 00:43:16.511 Format NVM (80h): Supported LBA-Change 00:43:16.511 I/O Commands 00:43:16.511 ------------ 00:43:16.511 Flush (00h): Supported LBA-Change 00:43:16.511 Write (01h): Supported LBA-Change 00:43:16.511 Read (02h): Supported 00:43:16.511 Compare (05h): Supported 00:43:16.511 Write Zeroes (08h): Supported LBA-Change 00:43:16.511 Dataset Management (09h): Supported LBA-Change 00:43:16.511 Unknown (0Ch): Supported 00:43:16.511 Unknown (12h): Supported 00:43:16.511 Copy (19h): Supported LBA-Change 00:43:16.511 Unknown (1Dh): Supported LBA-Change 00:43:16.511 00:43:16.511 Error Log 00:43:16.511 ========= 00:43:16.511 00:43:16.511 Arbitration 00:43:16.511 =========== 00:43:16.511 Arbitration Burst: no limit 00:43:16.511 00:43:16.511 Power Management 00:43:16.511 ================ 00:43:16.511 Number of Power States: 1 00:43:16.511 Current Power State: Power State #0 00:43:16.512 Power State #0: 00:43:16.512 Max Power: 25.00 W 00:43:16.512 Non-Operational State: 
Operational 00:43:16.512 Entry Latency: 16 microseconds 00:43:16.512 Exit Latency: 4 microseconds 00:43:16.512 Relative Read Throughput: 0 00:43:16.512 Relative Read Latency: 0 00:43:16.512 Relative Write Throughput: 0 00:43:16.512 Relative Write Latency: 0 00:43:16.770 Idle Power: Not Reported 00:43:16.770 Active Power: Not Reported 00:43:16.770 Non-Operational Permissive Mode: Not Supported 00:43:16.770 00:43:16.770 Health Information 00:43:16.770 ================== 00:43:16.770 Critical Warnings: 00:43:16.770 Available Spare Space: OK 00:43:16.770 Temperature: OK 00:43:16.770 Device Reliability: OK 00:43:16.770 Read Only: No 00:43:16.770 Volatile Memory Backup: OK 00:43:16.770 Current Temperature: 323 Kelvin (50 Celsius) 00:43:16.770 Temperature Threshold: 343 Kelvin (70 Celsius) 00:43:16.770 Available Spare: 0% 00:43:16.770 Available Spare Threshold: 0% 00:43:16.770 Life Percentage Used: 0% 00:43:16.770 Data Units Read: 4380 00:43:16.770 Data Units Written: 4038 00:43:16.770 Host Read Commands: 219728 00:43:16.770 Host Write Commands: 232726 00:43:16.770 Controller Busy Time: 0 minutes 00:43:16.770 Power Cycles: 0 00:43:16.770 Power On Hours: 0 hours 00:43:16.770 Unsafe Shutdowns: 0 00:43:16.770 Unrecoverable Media Errors: 0 00:43:16.770 Lifetime Error Log Entries: 0 00:43:16.770 Warning Temperature Time: 0 minutes 00:43:16.770 Critical Temperature Time: 0 minutes 00:43:16.770 00:43:16.770 Number of Queues 00:43:16.770 ================ 00:43:16.770 Number of I/O Submission Queues: 64 00:43:16.770 Number of I/O Completion Queues: 64 00:43:16.770 00:43:16.770 ZNS Specific Controller Data 00:43:16.770 ============================ 00:43:16.770 Zone Append Size Limit: 0 00:43:16.770 00:43:16.770 00:43:16.770 Active Namespaces 00:43:16.770 ================= 00:43:16.770 Namespace ID:1 00:43:16.770 Error Recovery Timeout: Unlimited 00:43:16.770 Command Set Identifier: NVM (00h) 00:43:16.770 Deallocate: Supported 00:43:16.770 Deallocated/Unwritten Error: Supported 00:43:16.770 Deallocated Read Value: All 0x00 00:43:16.770 Deallocate in Write Zeroes: Not Supported 00:43:16.770 Deallocated Guard Field: 0xFFFF 00:43:16.770 Flush: Supported 00:43:16.770 Reservation: Not Supported 00:43:16.770 Namespace Sharing Capabilities: Private 00:43:16.770 Size (in LBAs): 1310720 (5GiB) 00:43:16.770 Capacity (in LBAs): 1310720 (5GiB) 00:43:16.770 Utilization (in LBAs): 1310720 (5GiB) 00:43:16.770 Thin Provisioning: Not Supported 00:43:16.770 Per-NS Atomic Units: No 00:43:16.770 Maximum Single Source Range Length: 128 00:43:16.770 Maximum Copy Length: 128 00:43:16.770 Maximum Source Range Count: 128 00:43:16.770 NGUID/EUI64 Never Reused: No 00:43:16.770 Namespace Write Protected: No 00:43:16.770 Number of LBA Formats: 8 00:43:16.770 Current LBA Format: LBA Format #04 00:43:16.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:43:16.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:43:16.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:43:16.770 LBA Format #03: Data Size: 512 Metadata Size: 64 00:43:16.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:43:16.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:43:16.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:43:16.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:43:16.770 00:43:16.770 00:43:16.770 real 0m0.755s 00:43:16.770 user 0m0.334s 00:43:16.770 sys 0m0.307s 00:43:16.770 02:13:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:16.770 02:13:16 -- common/autotest_common.sh@10 -- # set +x 00:43:16.770 
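The identify dumps above are produced by nvme_identify() in test/nvme/nvme.sh via build/bin/spdk_nvme_identify, first for every discovered controller and then once per BDF. A minimal sketch for reproducing the same dump outside the harness, assuming the repo layout used in this job, root privileges, and jq on the PATH (head -n1 is only an illustrative way of picking the first controller; -i 0 matches the shared-memory id the stub process was started with):
  cd /home/vagrant/spdk_repo/spdk
  sudo ./scripts/setup.sh                                                      # bind NVMe devices away from the kernel driver (uio_pci_generic in this run)
  bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)    # first local controller, e.g. 0000:00:10.0
  sudo ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0        # per-controller dump like the one printed above
  sudo ./scripts/setup.sh reset                                                # hand the device back to the kernel nvme driver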
************************************ 00:43:16.770 END TEST nvme_identify 00:43:16.770 ************************************ 00:43:16.770 02:13:16 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:43:16.770 02:13:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:16.770 02:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:16.770 02:13:16 -- common/autotest_common.sh@10 -- # set +x 00:43:16.770 ************************************ 00:43:16.770 START TEST nvme_perf 00:43:16.770 ************************************ 00:43:16.770 02:13:16 -- common/autotest_common.sh@1111 -- # nvme_perf 00:43:16.770 02:13:16 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:43:18.156 Initializing NVMe Controllers 00:43:18.156 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:18.156 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:18.156 Initialization complete. Launching workers. 00:43:18.156 ======================================================== 00:43:18.156 Latency(us) 00:43:18.156 Device Information : IOPS MiB/s Average min max 00:43:18.156 PCIE (0000:00:10.0) NSID 1 from core 0: 79322.88 929.56 1612.07 606.32 8408.75 00:43:18.156 ======================================================== 00:43:18.156 Total : 79322.88 929.56 1612.07 606.32 8408.75 00:43:18.156 00:43:18.156 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:18.156 ================================================================================= 00:43:18.156 1.00000% : 776.290us 00:43:18.156 10.00000% : 1045.455us 00:43:18.156 25.00000% : 1248.305us 00:43:18.156 50.00000% : 1560.381us 00:43:18.156 75.00000% : 1872.457us 00:43:18.156 90.00000% : 2168.930us 00:43:18.156 95.00000% : 2543.421us 00:43:18.156 98.00000% : 2980.328us 00:43:18.156 99.00000% : 3386.027us 00:43:18.156 99.50000% : 3978.971us 00:43:18.156 99.90000% : 6085.486us 00:43:18.156 99.99000% : 8113.981us 00:43:18.156 99.99900% : 8426.057us 00:43:18.156 99.99990% : 8426.057us 00:43:18.156 99.99999% : 8426.057us 00:43:18.156 00:43:18.156 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:18.156 ============================================================================== 00:43:18.156 Range in us Cumulative IO count 00:43:18.156 604.648 - 608.549: 0.0013% ( 1) 00:43:18.156 620.251 - 624.152: 0.0025% ( 1) 00:43:18.156 628.053 - 631.954: 0.0038% ( 1) 00:43:18.156 631.954 - 635.855: 0.0050% ( 1) 00:43:18.156 635.855 - 639.756: 0.0063% ( 1) 00:43:18.156 639.756 - 643.657: 0.0076% ( 1) 00:43:18.156 643.657 - 647.558: 0.0113% ( 3) 00:43:18.156 647.558 - 651.459: 0.0139% ( 2) 00:43:18.156 651.459 - 655.360: 0.0202% ( 5) 00:43:18.156 655.360 - 659.261: 0.0239% ( 3) 00:43:18.156 663.162 - 667.063: 0.0328% ( 7) 00:43:18.156 667.063 - 670.964: 0.0441% ( 9) 00:43:18.156 670.964 - 674.865: 0.0643% ( 16) 00:43:18.156 674.865 - 678.766: 0.0743% ( 8) 00:43:18.156 678.766 - 682.667: 0.0857% ( 9) 00:43:18.156 682.667 - 686.568: 0.1046% ( 15) 00:43:18.156 686.568 - 690.469: 0.1210% ( 13) 00:43:18.156 690.469 - 694.370: 0.1424% ( 17) 00:43:18.156 694.370 - 698.270: 0.1575% ( 12) 00:43:18.156 698.270 - 702.171: 0.2004% ( 34) 00:43:18.156 702.171 - 706.072: 0.2256% ( 20) 00:43:18.156 706.072 - 709.973: 0.2608% ( 28) 00:43:18.156 709.973 - 713.874: 0.2860% ( 20) 00:43:18.156 713.874 - 717.775: 0.3112% ( 20) 00:43:18.156 717.775 - 721.676: 0.3465% ( 28) 00:43:18.156 721.676 - 725.577: 0.3957% ( 39) 00:43:18.156 725.577 - 729.478: 0.4221% ( 21) 
00:43:18.156 729.478 - 733.379: 0.4612% ( 31) 00:43:18.156 733.379 - 737.280: 0.4889% ( 22) 00:43:18.156 737.280 - 741.181: 0.5431% ( 43) 00:43:18.156 741.181 - 745.082: 0.5935% ( 40) 00:43:18.156 745.082 - 748.983: 0.6477% ( 43) 00:43:18.156 748.983 - 752.884: 0.6930% ( 36) 00:43:18.156 752.884 - 756.785: 0.7409% ( 38) 00:43:18.156 756.785 - 760.686: 0.7901% ( 39) 00:43:18.156 760.686 - 764.587: 0.8455% ( 44) 00:43:18.156 764.587 - 768.488: 0.9010% ( 44) 00:43:18.156 768.488 - 772.389: 0.9640% ( 50) 00:43:18.156 772.389 - 776.290: 1.0433% ( 63) 00:43:18.156 776.290 - 780.190: 1.1001% ( 45) 00:43:18.156 780.190 - 784.091: 1.1694% ( 55) 00:43:18.156 784.091 - 787.992: 1.2324% ( 50) 00:43:18.156 787.992 - 791.893: 1.3143% ( 65) 00:43:18.156 791.893 - 795.794: 1.3735% ( 47) 00:43:18.156 795.794 - 799.695: 1.4579% ( 67) 00:43:18.156 799.695 - 803.596: 1.5323% ( 59) 00:43:18.156 803.596 - 807.497: 1.6242% ( 73) 00:43:18.156 807.497 - 811.398: 1.7200% ( 76) 00:43:18.156 811.398 - 815.299: 1.7956% ( 60) 00:43:18.156 815.299 - 819.200: 1.8674% ( 57) 00:43:18.156 819.200 - 823.101: 1.9569% ( 71) 00:43:18.156 823.101 - 827.002: 2.0464% ( 71) 00:43:18.156 827.002 - 830.903: 2.1283% ( 65) 00:43:18.156 830.903 - 834.804: 2.2190% ( 72) 00:43:18.156 834.804 - 838.705: 2.3311% ( 89) 00:43:18.156 838.705 - 842.606: 2.4194% ( 70) 00:43:18.156 842.606 - 846.507: 2.5038% ( 67) 00:43:18.156 846.507 - 850.408: 2.6008% ( 77) 00:43:18.156 850.408 - 854.309: 2.6953% ( 75) 00:43:18.156 854.309 - 858.210: 2.8062% ( 88) 00:43:18.156 858.210 - 862.110: 2.9108% ( 83) 00:43:18.156 862.110 - 866.011: 3.0280% ( 93) 00:43:18.156 866.011 - 869.912: 3.1111% ( 66) 00:43:18.156 869.912 - 873.813: 3.2157% ( 83) 00:43:18.156 873.813 - 877.714: 3.3317% ( 92) 00:43:18.156 877.714 - 881.615: 3.4614% ( 103) 00:43:18.156 881.615 - 885.516: 3.5534% ( 73) 00:43:18.156 885.516 - 889.417: 3.6467% ( 74) 00:43:18.156 889.417 - 893.318: 3.7500% ( 82) 00:43:18.156 893.318 - 897.219: 3.8684% ( 94) 00:43:18.156 897.219 - 901.120: 3.9819% ( 90) 00:43:18.156 901.120 - 905.021: 4.1028% ( 96) 00:43:18.156 905.021 - 908.922: 4.2175% ( 91) 00:43:18.156 908.922 - 912.823: 4.3448% ( 101) 00:43:18.156 912.823 - 916.724: 4.4682% ( 98) 00:43:18.156 916.724 - 920.625: 4.5930% ( 99) 00:43:18.156 920.625 - 924.526: 4.7228% ( 103) 00:43:18.156 924.526 - 928.427: 4.8526% ( 103) 00:43:18.156 928.427 - 932.328: 5.0025% ( 119) 00:43:18.156 932.328 - 936.229: 5.1323% ( 103) 00:43:18.156 936.229 - 940.130: 5.2697% ( 109) 00:43:18.156 940.130 - 944.030: 5.4070% ( 109) 00:43:18.156 944.030 - 947.931: 5.5658% ( 126) 00:43:18.156 947.931 - 951.832: 5.7145% ( 118) 00:43:18.156 951.832 - 955.733: 5.8657% ( 120) 00:43:18.156 955.733 - 959.634: 5.9942% ( 102) 00:43:18.156 959.634 - 963.535: 6.1555% ( 128) 00:43:18.156 963.535 - 967.436: 6.3143% ( 126) 00:43:18.156 967.436 - 971.337: 6.4970% ( 145) 00:43:18.156 971.337 - 975.238: 6.6671% ( 135) 00:43:18.156 975.238 - 979.139: 6.8435% ( 140) 00:43:18.156 979.139 - 983.040: 7.0401% ( 156) 00:43:18.156 983.040 - 986.941: 7.2266% ( 148) 00:43:18.156 986.941 - 990.842: 7.3979% ( 136) 00:43:18.156 990.842 - 994.743: 7.5920% ( 154) 00:43:18.156 994.743 - 998.644: 7.7886% ( 156) 00:43:18.156 998.644 - 1006.446: 8.1893% ( 318) 00:43:18.156 1006.446 - 1014.248: 8.6051% ( 330) 00:43:18.156 1014.248 - 1022.050: 9.0675% ( 367) 00:43:18.156 1022.050 - 1029.851: 9.4909% ( 336) 00:43:18.156 1029.851 - 1037.653: 9.9735% ( 383) 00:43:18.156 1037.653 - 1045.455: 10.4284% ( 361) 00:43:18.156 1045.455 - 1053.257: 10.8959% ( 371) 00:43:18.156 
1053.257 - 1061.059: 11.3974% ( 398) 00:43:18.156 1061.059 - 1068.861: 11.9065% ( 404) 00:43:18.156 1068.861 - 1076.663: 12.4257% ( 412) 00:43:18.156 1076.663 - 1084.465: 12.9486% ( 415) 00:43:18.156 1084.465 - 1092.267: 13.4866% ( 427) 00:43:18.156 1092.267 - 1100.069: 14.0184% ( 422) 00:43:18.156 1100.069 - 1107.870: 14.6018% ( 463) 00:43:18.156 1107.870 - 1115.672: 15.1210% ( 412) 00:43:18.156 1115.672 - 1123.474: 15.6893% ( 451) 00:43:18.156 1123.474 - 1131.276: 16.2311% ( 430) 00:43:18.156 1131.276 - 1139.078: 16.8359% ( 480) 00:43:18.156 1139.078 - 1146.880: 17.3891% ( 439) 00:43:18.156 1146.880 - 1154.682: 17.9725% ( 463) 00:43:18.156 1154.682 - 1162.484: 18.5559% ( 463) 00:43:18.156 1162.484 - 1170.286: 19.1343% ( 459) 00:43:18.156 1170.286 - 1178.088: 19.7303% ( 473) 00:43:18.156 1178.088 - 1185.890: 20.3503% ( 492) 00:43:18.156 1185.890 - 1193.691: 20.9110% ( 445) 00:43:18.156 1193.691 - 1201.493: 21.5272% ( 489) 00:43:18.156 1201.493 - 1209.295: 22.1132% ( 465) 00:43:18.156 1209.295 - 1217.097: 22.7344% ( 493) 00:43:18.156 1217.097 - 1224.899: 23.3153% ( 461) 00:43:18.156 1224.899 - 1232.701: 23.9604% ( 512) 00:43:18.156 1232.701 - 1240.503: 24.5376% ( 458) 00:43:18.157 1240.503 - 1248.305: 25.1852% ( 514) 00:43:18.157 1248.305 - 1256.107: 25.7724% ( 466) 00:43:18.157 1256.107 - 1263.909: 26.4226% ( 516) 00:43:18.157 1263.909 - 1271.710: 27.0073% ( 464) 00:43:18.157 1271.710 - 1279.512: 27.6247% ( 490) 00:43:18.157 1279.512 - 1287.314: 28.2649% ( 508) 00:43:18.157 1287.314 - 1295.116: 28.8546% ( 468) 00:43:18.157 1295.116 - 1302.918: 29.5388% ( 543) 00:43:18.157 1302.918 - 1310.720: 30.1310% ( 470) 00:43:18.157 1310.720 - 1318.522: 30.8014% ( 532) 00:43:18.157 1318.522 - 1326.324: 31.3735% ( 454) 00:43:18.157 1326.324 - 1334.126: 32.0577% ( 543) 00:43:18.157 1334.126 - 1341.928: 32.6373% ( 460) 00:43:18.157 1341.928 - 1349.730: 33.2989% ( 525) 00:43:18.157 1349.730 - 1357.531: 33.8886% ( 468) 00:43:18.157 1357.531 - 1365.333: 34.5640% ( 536) 00:43:18.157 1365.333 - 1373.135: 35.1575% ( 471) 00:43:18.157 1373.135 - 1380.937: 35.8165% ( 523) 00:43:18.157 1380.937 - 1388.739: 36.4352% ( 491) 00:43:18.157 1388.739 - 1396.541: 37.0754% ( 508) 00:43:18.157 1396.541 - 1404.343: 37.7142% ( 507) 00:43:18.157 1404.343 - 1412.145: 38.3354% ( 493) 00:43:18.157 1412.145 - 1419.947: 38.9831% ( 514) 00:43:18.157 1419.947 - 1427.749: 39.6069% ( 495) 00:43:18.157 1427.749 - 1435.550: 40.2671% ( 524) 00:43:18.157 1435.550 - 1443.352: 40.8543% ( 466) 00:43:18.157 1443.352 - 1451.154: 41.5348% ( 540) 00:43:18.157 1451.154 - 1458.956: 42.1182% ( 463) 00:43:18.157 1458.956 - 1466.758: 42.7886% ( 532) 00:43:18.157 1466.758 - 1474.560: 43.4123% ( 495) 00:43:18.157 1474.560 - 1482.362: 44.0524% ( 508) 00:43:18.157 1482.362 - 1490.164: 44.6913% ( 507) 00:43:18.157 1490.164 - 1497.966: 45.3289% ( 506) 00:43:18.157 1497.966 - 1505.768: 45.9665% ( 506) 00:43:18.157 1505.768 - 1513.570: 46.6230% ( 521) 00:43:18.157 1513.570 - 1521.371: 47.2581% ( 504) 00:43:18.157 1521.371 - 1529.173: 47.9095% ( 517) 00:43:18.157 1529.173 - 1536.975: 48.5559% ( 513) 00:43:18.157 1536.975 - 1544.777: 49.1847% ( 499) 00:43:18.157 1544.777 - 1552.579: 49.8601% ( 536) 00:43:18.157 1552.579 - 1560.381: 50.4776% ( 490) 00:43:18.157 1560.381 - 1568.183: 51.1681% ( 548) 00:43:18.157 1568.183 - 1575.985: 51.7742% ( 481) 00:43:18.157 1575.985 - 1583.787: 52.4534% ( 539) 00:43:18.157 1583.787 - 1591.589: 53.0620% ( 483) 00:43:18.157 1591.589 - 1599.390: 53.7475% ( 544) 00:43:18.157 1599.390 - 1607.192: 54.3775% ( 500) 00:43:18.157 
1607.192 - 1614.994: 55.0592% ( 541) 00:43:18.157 1614.994 - 1622.796: 55.6905% ( 501) 00:43:18.157 1622.796 - 1630.598: 56.3722% ( 541) 00:43:18.157 1630.598 - 1638.400: 57.0073% ( 504) 00:43:18.157 1638.400 - 1646.202: 57.6802% ( 534) 00:43:18.157 1646.202 - 1654.004: 58.3354% ( 520) 00:43:18.157 1654.004 - 1661.806: 58.9869% ( 517) 00:43:18.157 1661.806 - 1669.608: 59.6321% ( 512) 00:43:18.157 1669.608 - 1677.410: 60.3037% ( 533) 00:43:18.157 1677.410 - 1685.211: 60.9803% ( 537) 00:43:18.157 1685.211 - 1693.013: 61.6205% ( 508) 00:43:18.157 1693.013 - 1700.815: 62.2908% ( 532) 00:43:18.157 1700.815 - 1708.617: 62.9398% ( 515) 00:43:18.157 1708.617 - 1716.419: 63.5900% ( 516) 00:43:18.157 1716.419 - 1724.221: 64.2591% ( 531) 00:43:18.157 1724.221 - 1732.023: 64.8891% ( 500) 00:43:18.157 1732.023 - 1739.825: 65.5418% ( 518) 00:43:18.157 1739.825 - 1747.627: 66.1744% ( 502) 00:43:18.157 1747.627 - 1755.429: 66.8284% ( 519) 00:43:18.157 1755.429 - 1763.230: 67.4420% ( 487) 00:43:18.157 1763.230 - 1771.032: 68.0670% ( 496) 00:43:18.157 1771.032 - 1778.834: 68.6883% ( 493) 00:43:18.157 1778.834 - 1786.636: 69.2994% ( 485) 00:43:18.157 1786.636 - 1794.438: 69.8715% ( 454) 00:43:18.157 1794.438 - 1802.240: 70.4738% ( 478) 00:43:18.157 1802.240 - 1810.042: 71.0396% ( 449) 00:43:18.157 1810.042 - 1817.844: 71.6583% ( 491) 00:43:18.157 1817.844 - 1825.646: 72.2039% ( 433) 00:43:18.157 1825.646 - 1833.448: 72.8075% ( 479) 00:43:18.157 1833.448 - 1841.250: 73.3090% ( 398) 00:43:18.157 1841.250 - 1849.051: 73.8962% ( 466) 00:43:18.157 1849.051 - 1856.853: 74.4103% ( 408) 00:43:18.157 1856.853 - 1864.655: 74.9597% ( 436) 00:43:18.157 1864.655 - 1872.457: 75.5028% ( 431) 00:43:18.157 1872.457 - 1880.259: 76.0156% ( 407) 00:43:18.157 1880.259 - 1888.061: 76.5499% ( 424) 00:43:18.157 1888.061 - 1895.863: 77.0577% ( 403) 00:43:18.157 1895.863 - 1903.665: 77.5958% ( 427) 00:43:18.157 1903.665 - 1911.467: 78.0784% ( 383) 00:43:18.157 1911.467 - 1919.269: 78.6328% ( 440) 00:43:18.157 1919.269 - 1927.070: 79.1079% ( 377) 00:43:18.157 1927.070 - 1934.872: 79.6384% ( 421) 00:43:18.157 1934.872 - 1942.674: 80.1235% ( 385) 00:43:18.157 1942.674 - 1950.476: 80.6313% ( 403) 00:43:18.157 1950.476 - 1958.278: 81.1139% ( 383) 00:43:18.157 1958.278 - 1966.080: 81.5801% ( 370) 00:43:18.157 1966.080 - 1973.882: 82.0539% ( 376) 00:43:18.157 1973.882 - 1981.684: 82.5176% ( 368) 00:43:18.157 1981.684 - 1989.486: 82.9624% ( 353) 00:43:18.157 1989.486 - 1997.288: 83.4299% ( 371) 00:43:18.157 1997.288 - 2012.891: 84.2704% ( 667) 00:43:18.157 2012.891 - 2028.495: 85.0794% ( 642) 00:43:18.157 2028.495 - 2044.099: 85.8380% ( 602) 00:43:18.157 2044.099 - 2059.703: 86.5197% ( 541) 00:43:18.157 2059.703 - 2075.307: 87.1648% ( 512) 00:43:18.157 2075.307 - 2090.910: 87.7508% ( 465) 00:43:18.157 2090.910 - 2106.514: 88.2926% ( 430) 00:43:18.157 2106.514 - 2122.118: 88.7966% ( 400) 00:43:18.157 2122.118 - 2137.722: 89.2351% ( 348) 00:43:18.157 2137.722 - 2153.326: 89.6535% ( 332) 00:43:18.157 2153.326 - 2168.930: 90.0416% ( 308) 00:43:18.157 2168.930 - 2184.533: 90.4032% ( 287) 00:43:18.157 2184.533 - 2200.137: 90.7371% ( 265) 00:43:18.157 2200.137 - 2215.741: 91.0270% ( 230) 00:43:18.157 2215.741 - 2231.345: 91.3143% ( 228) 00:43:18.157 2231.345 - 2246.949: 91.6053% ( 231) 00:43:18.157 2246.949 - 2262.552: 91.8611% ( 203) 00:43:18.157 2262.552 - 2278.156: 92.1043% ( 193) 00:43:18.157 2278.156 - 2293.760: 92.3425% ( 189) 00:43:18.157 2293.760 - 2309.364: 92.5668% ( 178) 00:43:18.157 2309.364 - 2324.968: 92.7797% ( 169) 00:43:18.157 
2324.968 - 2340.571: 92.9688% ( 150) 00:43:18.157 2340.571 - 2356.175: 93.1615% ( 153) 00:43:18.157 2356.175 - 2371.779: 93.3455% ( 146) 00:43:18.157 2371.779 - 2387.383: 93.5282% ( 145) 00:43:18.157 2387.383 - 2402.987: 93.6920% ( 130) 00:43:18.157 2402.987 - 2418.590: 93.8672% ( 139) 00:43:18.157 2418.590 - 2434.194: 94.0360% ( 134) 00:43:18.157 2434.194 - 2449.798: 94.1923% ( 124) 00:43:18.157 2449.798 - 2465.402: 94.3460% ( 122) 00:43:18.157 2465.402 - 2481.006: 94.4960% ( 119) 00:43:18.157 2481.006 - 2496.610: 94.6585% ( 129) 00:43:18.157 2496.610 - 2512.213: 94.8097% ( 120) 00:43:18.157 2512.213 - 2527.817: 94.9647% ( 123) 00:43:18.157 2527.817 - 2543.421: 95.0983% ( 106) 00:43:18.157 2543.421 - 2559.025: 95.2470% ( 118) 00:43:18.157 2559.025 - 2574.629: 95.3856% ( 110) 00:43:18.157 2574.629 - 2590.232: 95.5242% ( 110) 00:43:18.157 2590.232 - 2605.836: 95.6653% ( 112) 00:43:18.157 2605.836 - 2621.440: 95.8027% ( 109) 00:43:18.157 2621.440 - 2637.044: 95.9438% ( 112) 00:43:18.157 2637.044 - 2652.648: 96.0786% ( 107) 00:43:18.157 2652.648 - 2668.251: 96.1996% ( 96) 00:43:18.157 2668.251 - 2683.855: 96.3256% ( 100) 00:43:18.157 2683.855 - 2699.459: 96.4466% ( 96) 00:43:18.157 2699.459 - 2715.063: 96.5587% ( 89) 00:43:18.157 2715.063 - 2730.667: 96.6696% ( 88) 00:43:18.157 2730.667 - 2746.270: 96.7818% ( 89) 00:43:18.157 2746.270 - 2761.874: 96.8838% ( 81) 00:43:18.157 2761.874 - 2777.478: 96.9745% ( 72) 00:43:18.157 2777.478 - 2793.082: 97.0678% ( 74) 00:43:18.157 2793.082 - 2808.686: 97.1699% ( 81) 00:43:18.157 2808.686 - 2824.290: 97.2631% ( 74) 00:43:18.157 2824.290 - 2839.893: 97.3450% ( 65) 00:43:18.157 2839.893 - 2855.497: 97.4307% ( 68) 00:43:18.157 2855.497 - 2871.101: 97.5176% ( 69) 00:43:18.157 2871.101 - 2886.705: 97.5995% ( 65) 00:43:18.157 2886.705 - 2902.309: 97.6764% ( 61) 00:43:18.157 2902.309 - 2917.912: 97.7508% ( 59) 00:43:18.157 2917.912 - 2933.516: 97.8238% ( 58) 00:43:18.157 2933.516 - 2949.120: 97.8894% ( 52) 00:43:18.157 2949.120 - 2964.724: 97.9498% ( 48) 00:43:18.157 2964.724 - 2980.328: 98.0166% ( 53) 00:43:18.157 2980.328 - 2995.931: 98.0771% ( 48) 00:43:18.157 2995.931 - 3011.535: 98.1250% ( 38) 00:43:18.157 3011.535 - 3027.139: 98.1842% ( 47) 00:43:18.157 3027.139 - 3042.743: 98.2308% ( 37) 00:43:18.157 3042.743 - 3058.347: 98.2812% ( 40) 00:43:18.157 3058.347 - 3073.950: 98.3291% ( 38) 00:43:18.157 3073.950 - 3089.554: 98.3795% ( 40) 00:43:18.157 3089.554 - 3105.158: 98.4224% ( 34) 00:43:18.157 3105.158 - 3120.762: 98.4703% ( 38) 00:43:18.157 3120.762 - 3136.366: 98.5118% ( 33) 00:43:18.157 3136.366 - 3151.970: 98.5534% ( 33) 00:43:18.157 3151.970 - 3167.573: 98.5925% ( 31) 00:43:18.157 3167.573 - 3183.177: 98.6391% ( 37) 00:43:18.157 3183.177 - 3198.781: 98.6719% ( 26) 00:43:18.157 3198.781 - 3214.385: 98.7122% ( 32) 00:43:18.157 3214.385 - 3229.989: 98.7487% ( 29) 00:43:18.157 3229.989 - 3245.592: 98.7865% ( 30) 00:43:18.157 3245.592 - 3261.196: 98.8130% ( 21) 00:43:18.157 3261.196 - 3276.800: 98.8470% ( 27) 00:43:18.157 3276.800 - 3292.404: 98.8747% ( 22) 00:43:18.157 3292.404 - 3308.008: 98.9025% ( 22) 00:43:18.157 3308.008 - 3323.611: 98.9264% ( 19) 00:43:18.157 3323.611 - 3339.215: 98.9516% ( 20) 00:43:18.157 3339.215 - 3354.819: 98.9718% ( 16) 00:43:18.157 3354.819 - 3370.423: 98.9932% ( 17) 00:43:18.157 3370.423 - 3386.027: 99.0159% ( 18) 00:43:18.157 3386.027 - 3401.630: 99.0360% ( 16) 00:43:18.157 3401.630 - 3417.234: 99.0549% ( 15) 00:43:18.157 3417.234 - 3432.838: 99.0738% ( 15) 00:43:18.157 3432.838 - 3448.442: 99.0927% ( 15) 00:43:18.158 
3448.442 - 3464.046: 99.1066% ( 11) 00:43:18.158 3464.046 - 3479.650: 99.1205% ( 11) 00:43:18.158 3479.650 - 3495.253: 99.1356% ( 12) 00:43:18.158 3495.253 - 3510.857: 99.1457% ( 8) 00:43:18.158 3510.857 - 3526.461: 99.1620% ( 13) 00:43:18.158 3526.461 - 3542.065: 99.1759% ( 11) 00:43:18.158 3542.065 - 3557.669: 99.1885% ( 10) 00:43:18.158 3557.669 - 3573.272: 99.2049% ( 13) 00:43:18.158 3573.272 - 3588.876: 99.2175% ( 10) 00:43:18.158 3588.876 - 3604.480: 99.2326% ( 12) 00:43:18.158 3604.480 - 3620.084: 99.2414% ( 7) 00:43:18.158 3620.084 - 3635.688: 99.2528% ( 9) 00:43:18.158 3635.688 - 3651.291: 99.2641% ( 9) 00:43:18.158 3651.291 - 3666.895: 99.2767% ( 10) 00:43:18.158 3666.895 - 3682.499: 99.2893% ( 10) 00:43:18.158 3682.499 - 3698.103: 99.3032% ( 11) 00:43:18.158 3698.103 - 3713.707: 99.3120% ( 7) 00:43:18.158 3713.707 - 3729.310: 99.3246% ( 10) 00:43:18.158 3729.310 - 3744.914: 99.3385% ( 11) 00:43:18.158 3744.914 - 3760.518: 99.3511% ( 10) 00:43:18.158 3760.518 - 3776.122: 99.3649% ( 11) 00:43:18.158 3776.122 - 3791.726: 99.3775% ( 10) 00:43:18.158 3791.726 - 3807.330: 99.3914% ( 11) 00:43:18.158 3807.330 - 3822.933: 99.4015% ( 8) 00:43:18.158 3822.933 - 3838.537: 99.4141% ( 10) 00:43:18.158 3838.537 - 3854.141: 99.4267% ( 10) 00:43:18.158 3854.141 - 3869.745: 99.4342% ( 6) 00:43:18.158 3869.745 - 3885.349: 99.4468% ( 10) 00:43:18.158 3885.349 - 3900.952: 99.4556% ( 7) 00:43:18.158 3900.952 - 3916.556: 99.4670% ( 9) 00:43:18.158 3916.556 - 3932.160: 99.4758% ( 7) 00:43:18.158 3932.160 - 3947.764: 99.4871% ( 9) 00:43:18.158 3947.764 - 3963.368: 99.4972% ( 8) 00:43:18.158 3963.368 - 3978.971: 99.5060% ( 7) 00:43:18.158 3978.971 - 3994.575: 99.5174% ( 9) 00:43:18.158 3994.575 - 4025.783: 99.5325% ( 12) 00:43:18.158 4025.783 - 4056.990: 99.5476% ( 12) 00:43:18.158 4056.990 - 4088.198: 99.5615% ( 11) 00:43:18.158 4088.198 - 4119.406: 99.5766% ( 12) 00:43:18.158 4119.406 - 4150.613: 99.5905% ( 11) 00:43:18.158 4150.613 - 4181.821: 99.6031% ( 10) 00:43:18.158 4181.821 - 4213.029: 99.6182% ( 12) 00:43:18.158 4213.029 - 4244.236: 99.6321% ( 11) 00:43:18.158 4244.236 - 4275.444: 99.6459% ( 11) 00:43:18.158 4275.444 - 4306.651: 99.6585% ( 10) 00:43:18.158 4306.651 - 4337.859: 99.6673% ( 7) 00:43:18.158 4337.859 - 4369.067: 99.6762% ( 7) 00:43:18.158 4369.067 - 4400.274: 99.6837% ( 6) 00:43:18.158 4400.274 - 4431.482: 99.6913% ( 6) 00:43:18.158 4431.482 - 4462.690: 99.6976% ( 5) 00:43:18.158 4462.690 - 4493.897: 99.7014% ( 3) 00:43:18.158 4493.897 - 4525.105: 99.7051% ( 3) 00:43:18.158 4525.105 - 4556.312: 99.7077% ( 2) 00:43:18.158 4556.312 - 4587.520: 99.7102% ( 2) 00:43:18.158 4587.520 - 4618.728: 99.7140% ( 3) 00:43:18.158 4618.728 - 4649.935: 99.7165% ( 2) 00:43:18.158 4649.935 - 4681.143: 99.7203% ( 3) 00:43:18.158 4681.143 - 4712.350: 99.7240% ( 3) 00:43:18.158 4712.350 - 4743.558: 99.7278% ( 3) 00:43:18.158 4743.558 - 4774.766: 99.7291% ( 1) 00:43:18.158 4774.766 - 4805.973: 99.7303% ( 1) 00:43:18.158 4805.973 - 4837.181: 99.7316% ( 1) 00:43:18.158 4837.181 - 4868.389: 99.7329% ( 1) 00:43:18.158 4868.389 - 4899.596: 99.7341% ( 1) 00:43:18.158 4899.596 - 4930.804: 99.7354% ( 1) 00:43:18.158 4962.011 - 4993.219: 99.7366% ( 1) 00:43:18.158 4993.219 - 5024.427: 99.7379% ( 1) 00:43:18.158 5024.427 - 5055.634: 99.7392% ( 1) 00:43:18.158 5086.842 - 5118.050: 99.7429% ( 3) 00:43:18.158 5118.050 - 5149.257: 99.7480% ( 4) 00:43:18.158 5149.257 - 5180.465: 99.7505% ( 2) 00:43:18.158 5180.465 - 5211.672: 99.7581% ( 6) 00:43:18.158 5211.672 - 5242.880: 99.7631% ( 4) 00:43:18.158 5242.880 - 
5274.088: 99.7681% ( 4) 00:43:18.158 5274.088 - 5305.295: 99.7719% ( 3) 00:43:18.158 5305.295 - 5336.503: 99.7782% ( 5) 00:43:18.158 5336.503 - 5367.710: 99.7845% ( 5) 00:43:18.158 5367.710 - 5398.918: 99.7883% ( 3) 00:43:18.158 5398.918 - 5430.126: 99.7933% ( 4) 00:43:18.158 5430.126 - 5461.333: 99.7996% ( 5) 00:43:18.158 5461.333 - 5492.541: 99.8047% ( 4) 00:43:18.158 5492.541 - 5523.749: 99.8097% ( 4) 00:43:18.158 5523.749 - 5554.956: 99.8148% ( 4) 00:43:18.158 5554.956 - 5586.164: 99.8211% ( 5) 00:43:18.158 5586.164 - 5617.371: 99.8274% ( 5) 00:43:18.158 5617.371 - 5648.579: 99.8324% ( 4) 00:43:18.158 5648.579 - 5679.787: 99.8374% ( 4) 00:43:18.158 5679.787 - 5710.994: 99.8412% ( 3) 00:43:18.158 5710.994 - 5742.202: 99.8463% ( 4) 00:43:18.158 5742.202 - 5773.410: 99.8513% ( 4) 00:43:18.158 5773.410 - 5804.617: 99.8576% ( 5) 00:43:18.158 5804.617 - 5835.825: 99.8601% ( 2) 00:43:18.158 5835.825 - 5867.032: 99.8664% ( 5) 00:43:18.158 5867.032 - 5898.240: 99.8715% ( 4) 00:43:18.158 5898.240 - 5929.448: 99.8765% ( 4) 00:43:18.158 5929.448 - 5960.655: 99.8828% ( 5) 00:43:18.158 5960.655 - 5991.863: 99.8866% ( 3) 00:43:18.158 5991.863 - 6023.070: 99.8929% ( 5) 00:43:18.158 6023.070 - 6054.278: 99.8992% ( 5) 00:43:18.158 6054.278 - 6085.486: 99.9042% ( 4) 00:43:18.158 6085.486 - 6116.693: 99.9105% ( 5) 00:43:18.158 6116.693 - 6147.901: 99.9143% ( 3) 00:43:18.158 6147.901 - 6179.109: 99.9206% ( 5) 00:43:18.158 6179.109 - 6210.316: 99.9269% ( 5) 00:43:18.158 6210.316 - 6241.524: 99.9282% ( 1) 00:43:18.158 6241.524 - 6272.731: 99.9320% ( 3) 00:43:18.158 6272.731 - 6303.939: 99.9357% ( 3) 00:43:18.158 6303.939 - 6335.147: 99.9370% ( 1) 00:43:18.158 6335.147 - 6366.354: 99.9383% ( 1) 00:43:18.158 6366.354 - 6397.562: 99.9395% ( 1) 00:43:18.158 6397.562 - 6428.770: 99.9408% ( 1) 00:43:18.158 6459.977 - 6491.185: 99.9420% ( 1) 00:43:18.158 6491.185 - 6522.392: 99.9433% ( 1) 00:43:18.158 6522.392 - 6553.600: 99.9446% ( 1) 00:43:18.158 6584.808 - 6616.015: 99.9458% ( 1) 00:43:18.158 6616.015 - 6647.223: 99.9471% ( 1) 00:43:18.158 6647.223 - 6678.430: 99.9483% ( 1) 00:43:18.158 6709.638 - 6740.846: 99.9496% ( 1) 00:43:18.158 6740.846 - 6772.053: 99.9509% ( 1) 00:43:18.158 6803.261 - 6834.469: 99.9521% ( 1) 00:43:18.158 6834.469 - 6865.676: 99.9534% ( 1) 00:43:18.158 6865.676 - 6896.884: 99.9546% ( 1) 00:43:18.158 6928.091 - 6959.299: 99.9559% ( 1) 00:43:18.158 6959.299 - 6990.507: 99.9572% ( 1) 00:43:18.158 6990.507 - 7021.714: 99.9584% ( 1) 00:43:18.158 7021.714 - 7052.922: 99.9597% ( 1) 00:43:18.158 7084.130 - 7115.337: 99.9609% ( 1) 00:43:18.158 7115.337 - 7146.545: 99.9622% ( 1) 00:43:18.158 7146.545 - 7177.752: 99.9635% ( 1) 00:43:18.158 7177.752 - 7208.960: 99.9647% ( 1) 00:43:18.158 7240.168 - 7271.375: 99.9660% ( 1) 00:43:18.158 7271.375 - 7302.583: 99.9672% ( 1) 00:43:18.158 7333.790 - 7364.998: 99.9685% ( 1) 00:43:18.158 7364.998 - 7396.206: 99.9698% ( 1) 00:43:18.158 7396.206 - 7427.413: 99.9710% ( 1) 00:43:18.158 7458.621 - 7489.829: 99.9723% ( 1) 00:43:18.158 7489.829 - 7521.036: 99.9735% ( 1) 00:43:18.158 7521.036 - 7552.244: 99.9748% ( 1) 00:43:18.158 7552.244 - 7583.451: 99.9761% ( 1) 00:43:18.158 7614.659 - 7645.867: 99.9773% ( 1) 00:43:18.158 7645.867 - 7677.074: 99.9786% ( 1) 00:43:18.158 7708.282 - 7739.490: 99.9798% ( 1) 00:43:18.158 7739.490 - 7770.697: 99.9811% ( 1) 00:43:18.158 7801.905 - 7833.112: 99.9824% ( 1) 00:43:18.158 7833.112 - 7864.320: 99.9836% ( 1) 00:43:18.158 7895.528 - 7926.735: 99.9849% ( 1) 00:43:18.158 7926.735 - 7957.943: 99.9861% ( 1) 00:43:18.158 7989.150 - 
8051.566: 99.9887% ( 2) 00:43:18.158 8051.566 - 8113.981: 99.9912% ( 2) 00:43:18.158 8113.981 - 8176.396: 99.9924% ( 1) 00:43:18.158 8176.396 - 8238.811: 99.9950% ( 2) 00:43:18.158 8238.811 - 8301.227: 99.9975% ( 2) 00:43:18.158 8301.227 - 8363.642: 99.9987% ( 1) 00:43:18.158 8363.642 - 8426.057: 100.0000% ( 1) 00:43:18.158 00:43:18.158 02:13:18 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:43:19.535 Initializing NVMe Controllers 00:43:19.535 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:19.535 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:19.535 Initialization complete. Launching workers. 00:43:19.535 ======================================================== 00:43:19.535 Latency(us) 00:43:19.535 Device Information : IOPS MiB/s Average min max 00:43:19.535 PCIE (0000:00:10.0) NSID 1 from core 0: 65574.57 768.45 1950.34 863.90 7735.21 00:43:19.535 ======================================================== 00:43:19.535 Total : 65574.57 768.45 1950.34 863.90 7735.21 00:43:19.535 00:43:19.535 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:19.535 ================================================================================= 00:43:19.535 1.00000% : 1162.484us 00:43:19.535 10.00000% : 1380.937us 00:43:19.535 25.00000% : 1536.975us 00:43:19.535 50.00000% : 1888.061us 00:43:19.535 75.00000% : 2246.949us 00:43:19.535 90.00000% : 2559.025us 00:43:19.535 95.00000% : 2871.101us 00:43:19.535 98.00000% : 3432.838us 00:43:19.535 99.00000% : 3822.933us 00:43:19.535 99.50000% : 4306.651us 00:43:19.535 99.90000% : 5398.918us 00:43:19.535 99.99000% : 7521.036us 00:43:19.535 99.99900% : 7739.490us 00:43:19.535 99.99990% : 7739.490us 00:43:19.535 99.99999% : 7739.490us 00:43:19.535 00:43:19.535 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:19.535 ============================================================================== 00:43:19.535 Range in us Cumulative IO count 00:43:19.535 862.110 - 866.011: 0.0015% ( 1) 00:43:19.535 877.714 - 881.615: 0.0030% ( 1) 00:43:19.535 944.030 - 947.931: 0.0046% ( 1) 00:43:19.535 967.436 - 971.337: 0.0061% ( 1) 00:43:19.535 971.337 - 975.238: 0.0076% ( 1) 00:43:19.535 975.238 - 979.139: 0.0091% ( 1) 00:43:19.535 979.139 - 983.040: 0.0107% ( 1) 00:43:19.535 983.040 - 986.941: 0.0122% ( 1) 00:43:19.535 986.941 - 990.842: 0.0168% ( 3) 00:43:19.535 994.743 - 998.644: 0.0229% ( 4) 00:43:19.535 998.644 - 1006.446: 0.0274% ( 3) 00:43:19.535 1006.446 - 1014.248: 0.0351% ( 5) 00:43:19.535 1014.248 - 1022.050: 0.0503% ( 10) 00:43:19.535 1022.050 - 1029.851: 0.0625% ( 8) 00:43:19.535 1029.851 - 1037.653: 0.0717% ( 6) 00:43:19.535 1037.653 - 1045.455: 0.1021% ( 20) 00:43:19.535 1045.455 - 1053.257: 0.1281% ( 17) 00:43:19.535 1053.257 - 1061.059: 0.1494% ( 14) 00:43:19.535 1061.059 - 1068.861: 0.1753% ( 17) 00:43:19.535 1068.861 - 1076.663: 0.1982% ( 15) 00:43:19.535 1076.663 - 1084.465: 0.2348% ( 24) 00:43:19.535 1084.465 - 1092.267: 0.2897% ( 36) 00:43:19.535 1092.267 - 1100.069: 0.3308% ( 27) 00:43:19.535 1100.069 - 1107.870: 0.3857% ( 36) 00:43:19.535 1107.870 - 1115.672: 0.4436% ( 38) 00:43:19.535 1115.672 - 1123.474: 0.5122% ( 45) 00:43:19.535 1123.474 - 1131.276: 0.6007% ( 58) 00:43:19.535 1131.276 - 1139.078: 0.6982% ( 64) 00:43:19.535 1139.078 - 1146.880: 0.8294% ( 86) 00:43:19.535 1146.880 - 1154.682: 0.9589% ( 85) 00:43:19.535 1154.682 - 1162.484: 1.0946% ( 89) 00:43:19.535 1162.484 - 1170.286: 1.2288% ( 88) 00:43:19.535 1170.286 - 
1178.088: 1.3873% ( 104) 00:43:19.535 1178.088 - 1185.890: 1.5489% ( 106) 00:43:19.535 1185.890 - 1193.691: 1.7258% ( 116) 00:43:19.535 1193.691 - 1201.493: 1.9209% ( 128) 00:43:19.535 1201.493 - 1209.295: 2.0841% ( 107) 00:43:19.535 1209.295 - 1217.097: 2.2945% ( 138) 00:43:19.535 1217.097 - 1224.899: 2.4942% ( 131) 00:43:19.535 1224.899 - 1232.701: 2.7533% ( 170) 00:43:19.535 1232.701 - 1240.503: 2.9881% ( 154) 00:43:19.535 1240.503 - 1248.305: 3.2519% ( 173) 00:43:19.535 1248.305 - 1256.107: 3.5095% ( 169) 00:43:19.535 1256.107 - 1263.909: 3.8114% ( 198) 00:43:19.535 1263.909 - 1271.710: 4.1148% ( 199) 00:43:19.535 1271.710 - 1279.512: 4.4044% ( 190) 00:43:19.535 1279.512 - 1287.314: 4.7368% ( 218) 00:43:19.535 1287.314 - 1295.116: 5.0874% ( 230) 00:43:19.535 1295.116 - 1302.918: 5.4289% ( 224) 00:43:19.535 1302.918 - 1310.720: 5.7811% ( 231) 00:43:19.535 1310.720 - 1318.522: 6.1866% ( 266) 00:43:19.535 1318.522 - 1326.324: 6.6349% ( 294) 00:43:19.535 1326.324 - 1334.126: 7.0602% ( 279) 00:43:19.535 1334.126 - 1341.928: 7.5252% ( 305) 00:43:19.535 1341.928 - 1349.730: 8.0268% ( 329) 00:43:19.535 1349.730 - 1357.531: 8.5832% ( 365) 00:43:19.535 1357.531 - 1365.333: 9.1184% ( 351) 00:43:19.535 1365.333 - 1373.135: 9.6946% ( 378) 00:43:19.535 1373.135 - 1380.937: 10.3243% ( 413) 00:43:19.535 1380.937 - 1388.739: 10.9676% ( 422) 00:43:19.535 1388.739 - 1396.541: 11.6598% ( 454) 00:43:19.535 1396.541 - 1404.343: 12.3504% ( 453) 00:43:19.535 1404.343 - 1412.145: 13.0929% ( 487) 00:43:19.535 1412.145 - 1419.947: 13.8689% ( 509) 00:43:19.535 1419.947 - 1427.749: 14.6098% ( 486) 00:43:19.535 1427.749 - 1435.550: 15.3888% ( 511) 00:43:19.535 1435.550 - 1443.352: 16.1725% ( 514) 00:43:19.535 1443.352 - 1451.154: 16.9424% ( 505) 00:43:19.535 1451.154 - 1458.956: 17.7687% ( 542) 00:43:19.535 1458.956 - 1466.758: 18.5233% ( 495) 00:43:19.535 1466.758 - 1474.560: 19.2566% ( 481) 00:43:19.535 1474.560 - 1482.362: 20.1104% ( 560) 00:43:19.535 1482.362 - 1490.164: 20.8757% ( 502) 00:43:19.535 1490.164 - 1497.966: 21.6746% ( 524) 00:43:19.535 1497.966 - 1505.768: 22.4750% ( 525) 00:43:19.535 1505.768 - 1513.570: 23.1961% ( 473) 00:43:19.535 1513.570 - 1521.371: 23.9446% ( 491) 00:43:19.535 1521.371 - 1529.173: 24.6642% ( 472) 00:43:19.535 1529.173 - 1536.975: 25.3884% ( 475) 00:43:19.535 1536.975 - 1544.777: 26.0546% ( 437) 00:43:19.535 1544.777 - 1552.579: 26.7346% ( 446) 00:43:19.535 1552.579 - 1560.381: 27.3840% ( 426) 00:43:19.535 1560.381 - 1568.183: 28.0868% ( 461) 00:43:19.535 1568.183 - 1575.985: 28.6936% ( 398) 00:43:19.535 1575.985 - 1583.787: 29.2851% ( 388) 00:43:19.535 1583.787 - 1591.589: 29.9072% ( 408) 00:43:19.535 1591.589 - 1599.390: 30.5185% ( 401) 00:43:19.535 1599.390 - 1607.192: 31.0978% ( 380) 00:43:19.535 1607.192 - 1614.994: 31.6650% ( 372) 00:43:19.535 1614.994 - 1622.796: 32.2001% ( 351) 00:43:19.535 1622.796 - 1630.598: 32.8160% ( 404) 00:43:19.535 1630.598 - 1638.400: 33.3694% ( 363) 00:43:19.535 1638.400 - 1646.202: 33.9045% ( 351) 00:43:19.535 1646.202 - 1654.004: 34.4366% ( 349) 00:43:19.535 1654.004 - 1661.806: 35.0175% ( 381) 00:43:19.535 1661.806 - 1669.608: 35.4916% ( 311) 00:43:19.535 1669.608 - 1677.410: 36.0572% ( 371) 00:43:19.535 1677.410 - 1685.211: 36.5405% ( 317) 00:43:19.535 1685.211 - 1693.013: 37.1030% ( 369) 00:43:19.535 1693.013 - 1700.815: 37.6290% ( 345) 00:43:19.535 1700.815 - 1708.617: 38.1504% ( 342) 00:43:19.535 1708.617 - 1716.419: 38.6855% ( 351) 00:43:19.535 1716.419 - 1724.221: 39.2191% ( 350) 00:43:19.535 1724.221 - 1732.023: 39.7344% ( 338) 
00:43:19.535 1732.023 - 1739.825: 40.2345% ( 328) 00:43:19.535 1739.825 - 1747.627: 40.7940% ( 367) 00:43:19.535 1747.627 - 1755.429: 41.2895% ( 325) 00:43:19.535 1755.429 - 1763.230: 41.8017% ( 336) 00:43:19.535 1763.230 - 1771.032: 42.3399% ( 353) 00:43:19.535 1771.032 - 1778.834: 42.8552% ( 338) 00:43:19.535 1778.834 - 1786.636: 43.3933% ( 353) 00:43:19.535 1786.636 - 1794.438: 43.9026% ( 334) 00:43:19.535 1794.438 - 1802.240: 44.4194% ( 339) 00:43:19.535 1802.240 - 1810.042: 44.9423% ( 343) 00:43:19.535 1810.042 - 1817.844: 45.4759% ( 350) 00:43:19.535 1817.844 - 1825.646: 46.0080% ( 349) 00:43:19.535 1825.646 - 1833.448: 46.5080% ( 328) 00:43:19.535 1833.448 - 1841.250: 47.0569% ( 360) 00:43:19.535 1841.250 - 1849.051: 47.6042% ( 359) 00:43:19.535 1849.051 - 1856.853: 48.1637% ( 367) 00:43:19.535 1856.853 - 1864.655: 48.6957% ( 349) 00:43:19.535 1864.655 - 1872.457: 49.2232% ( 346) 00:43:19.535 1872.457 - 1880.259: 49.8193% ( 391) 00:43:19.535 1880.259 - 1888.061: 50.3682% ( 360) 00:43:19.535 1888.061 - 1895.863: 50.9292% ( 368) 00:43:19.535 1895.863 - 1903.665: 51.5116% ( 382) 00:43:19.535 1903.665 - 1911.467: 52.0742% ( 369) 00:43:19.535 1911.467 - 1919.269: 52.6382% ( 370) 00:43:19.535 1919.269 - 1927.070: 53.1886% ( 361) 00:43:19.535 1927.070 - 1934.872: 53.7496% ( 368) 00:43:19.535 1934.872 - 1942.674: 54.3259% ( 378) 00:43:19.535 1942.674 - 1950.476: 54.8854% ( 367) 00:43:19.535 1950.476 - 1958.278: 55.4449% ( 367) 00:43:19.535 1958.278 - 1966.080: 56.0075% ( 369) 00:43:19.535 1966.080 - 1973.882: 56.5502% ( 356) 00:43:19.535 1973.882 - 1981.684: 57.1326% ( 382) 00:43:19.535 1981.684 - 1989.486: 57.6449% ( 336) 00:43:19.535 1989.486 - 1997.288: 58.2364% ( 388) 00:43:19.535 1997.288 - 2012.891: 59.3417% ( 725) 00:43:19.535 2012.891 - 2028.495: 60.4714% ( 741) 00:43:19.535 2028.495 - 2044.099: 61.6240% ( 756) 00:43:19.535 2044.099 - 2059.703: 62.6972% ( 704) 00:43:19.535 2059.703 - 2075.307: 63.8025% ( 725) 00:43:19.535 2075.307 - 2090.910: 64.9185% ( 732) 00:43:19.535 2090.910 - 2106.514: 65.9903% ( 703) 00:43:19.535 2106.514 - 2122.118: 67.0178% ( 674) 00:43:19.535 2122.118 - 2137.722: 68.1033% ( 712) 00:43:19.535 2137.722 - 2153.326: 69.1857% ( 710) 00:43:19.535 2153.326 - 2168.930: 70.2087% ( 671) 00:43:19.535 2168.930 - 2184.533: 71.2561% ( 687) 00:43:19.535 2184.533 - 2200.137: 72.2699% ( 665) 00:43:19.535 2200.137 - 2215.741: 73.3218% ( 690) 00:43:19.535 2215.741 - 2231.345: 74.3067% ( 646) 00:43:19.535 2231.345 - 2246.949: 75.3053% ( 655) 00:43:19.535 2246.949 - 2262.552: 76.3069% ( 657) 00:43:19.535 2262.552 - 2278.156: 77.2445% ( 615) 00:43:19.536 2278.156 - 2293.760: 78.1791% ( 613) 00:43:19.536 2293.760 - 2309.364: 79.1304% ( 624) 00:43:19.536 2309.364 - 2324.968: 80.0497% ( 603) 00:43:19.536 2324.968 - 2340.571: 80.8638% ( 534) 00:43:19.536 2340.571 - 2356.175: 81.6993% ( 548) 00:43:19.536 2356.175 - 2371.779: 82.5393% ( 551) 00:43:19.536 2371.779 - 2387.383: 83.3214% ( 513) 00:43:19.536 2387.383 - 2402.987: 84.0699% ( 491) 00:43:19.536 2402.987 - 2418.590: 84.8155% ( 489) 00:43:19.536 2418.590 - 2434.194: 85.5106% ( 456) 00:43:19.536 2434.194 - 2449.798: 86.1982% ( 451) 00:43:19.536 2449.798 - 2465.402: 86.8462% ( 425) 00:43:19.536 2465.402 - 2481.006: 87.4804% ( 416) 00:43:19.536 2481.006 - 2496.610: 88.1192% ( 419) 00:43:19.536 2496.610 - 2512.213: 88.7198% ( 394) 00:43:19.536 2512.213 - 2527.817: 89.3190% ( 393) 00:43:19.536 2527.817 - 2543.421: 89.8709% ( 362) 00:43:19.536 2543.421 - 2559.025: 90.4182% ( 359) 00:43:19.536 2559.025 - 2574.629: 90.9350% ( 339) 
00:43:19.536 2574.629 - 2590.232: 91.4015% ( 306) 00:43:19.536 2590.232 - 2605.836: 91.8482% ( 293) 00:43:19.536 2605.836 - 2621.440: 92.2400% ( 257) 00:43:19.536 2621.440 - 2637.044: 92.5617% ( 211) 00:43:19.536 2637.044 - 2652.648: 92.8346% ( 179) 00:43:19.536 2652.648 - 2668.251: 93.0724% ( 156) 00:43:19.536 2668.251 - 2683.855: 93.2600% ( 123) 00:43:19.536 2683.855 - 2699.459: 93.4581% ( 130) 00:43:19.536 2699.459 - 2715.063: 93.6350% ( 116) 00:43:19.536 2715.063 - 2730.667: 93.7874% ( 100) 00:43:19.536 2730.667 - 2746.270: 93.9490% ( 106) 00:43:19.536 2746.270 - 2761.874: 94.1091% ( 105) 00:43:19.536 2761.874 - 2777.478: 94.2616% ( 100) 00:43:19.536 2777.478 - 2793.082: 94.4034% ( 93) 00:43:19.536 2793.082 - 2808.686: 94.5543% ( 99) 00:43:19.536 2808.686 - 2824.290: 94.6991% ( 95) 00:43:19.536 2824.290 - 2839.893: 94.8424% ( 94) 00:43:19.536 2839.893 - 2855.497: 94.9675% ( 82) 00:43:19.536 2855.497 - 2871.101: 95.0955% ( 84) 00:43:19.536 2871.101 - 2886.705: 95.2266% ( 86) 00:43:19.536 2886.705 - 2902.309: 95.3410% ( 75) 00:43:19.536 2902.309 - 2917.912: 95.4462% ( 69) 00:43:19.536 2917.912 - 2933.516: 95.5483% ( 67) 00:43:19.536 2933.516 - 2949.120: 95.6459% ( 64) 00:43:19.536 2949.120 - 2964.724: 95.7511% ( 69) 00:43:19.536 2964.724 - 2980.328: 95.8441% ( 61) 00:43:19.536 2980.328 - 2995.931: 95.9340% ( 59) 00:43:19.536 2995.931 - 3011.535: 96.0331% ( 65) 00:43:19.536 3011.535 - 3027.139: 96.1231% ( 59) 00:43:19.536 3027.139 - 3042.743: 96.2161% ( 61) 00:43:19.536 3042.743 - 3058.347: 96.3167% ( 66) 00:43:19.536 3058.347 - 3073.950: 96.4066% ( 59) 00:43:19.536 3073.950 - 3089.554: 96.4813% ( 49) 00:43:19.536 3089.554 - 3105.158: 96.5698% ( 58) 00:43:19.536 3105.158 - 3120.762: 96.6582% ( 58) 00:43:19.536 3120.762 - 3136.366: 96.7375% ( 52) 00:43:19.536 3136.366 - 3151.970: 96.8122% ( 49) 00:43:19.536 3151.970 - 3167.573: 96.8914% ( 52) 00:43:19.536 3167.573 - 3183.177: 96.9753% ( 55) 00:43:19.536 3183.177 - 3198.781: 97.0500% ( 49) 00:43:19.536 3198.781 - 3214.385: 97.1216% ( 47) 00:43:19.536 3214.385 - 3229.989: 97.1979% ( 50) 00:43:19.536 3229.989 - 3245.592: 97.2680% ( 46) 00:43:19.536 3245.592 - 3261.196: 97.3442% ( 50) 00:43:19.536 3261.196 - 3276.800: 97.4006% ( 37) 00:43:19.536 3276.800 - 3292.404: 97.4692% ( 45) 00:43:19.536 3292.404 - 3308.008: 97.5378% ( 45) 00:43:19.536 3308.008 - 3323.611: 97.6049% ( 44) 00:43:19.536 3323.611 - 3339.215: 97.6735% ( 45) 00:43:19.536 3339.215 - 3354.819: 97.7437% ( 46) 00:43:19.536 3354.819 - 3370.423: 97.8062% ( 41) 00:43:19.536 3370.423 - 3386.027: 97.8687% ( 41) 00:43:19.536 3386.027 - 3401.630: 97.9297% ( 40) 00:43:19.536 3401.630 - 3417.234: 97.9815% ( 34) 00:43:19.536 3417.234 - 3432.838: 98.0394% ( 38) 00:43:19.536 3432.838 - 3448.442: 98.0958% ( 37) 00:43:19.536 3448.442 - 3464.046: 98.1477% ( 34) 00:43:19.536 3464.046 - 3479.650: 98.1995% ( 34) 00:43:19.536 3479.650 - 3495.253: 98.2498% ( 33) 00:43:19.536 3495.253 - 3510.857: 98.2971% ( 31) 00:43:19.536 3510.857 - 3526.461: 98.3489% ( 34) 00:43:19.536 3526.461 - 3542.065: 98.3962% ( 31) 00:43:19.536 3542.065 - 3557.669: 98.4404% ( 29) 00:43:19.536 3557.669 - 3573.272: 98.4831% ( 28) 00:43:19.536 3573.272 - 3588.876: 98.5242% ( 27) 00:43:19.536 3588.876 - 3604.480: 98.5669% ( 28) 00:43:19.536 3604.480 - 3620.084: 98.6066% ( 26) 00:43:19.536 3620.084 - 3635.688: 98.6431% ( 24) 00:43:19.536 3635.688 - 3651.291: 98.6782% ( 23) 00:43:19.536 3651.291 - 3666.895: 98.7148% ( 24) 00:43:19.536 3666.895 - 3682.499: 98.7468% ( 21) 00:43:19.536 3682.499 - 3698.103: 98.7819% ( 23) 00:43:19.536 
3698.103 - 3713.707: 98.8139% ( 21) 00:43:19.536 3713.707 - 3729.310: 98.8413% ( 18) 00:43:19.536 3729.310 - 3744.914: 98.8703% ( 19) 00:43:19.536 3744.914 - 3760.518: 98.8993% ( 19) 00:43:19.536 3760.518 - 3776.122: 98.9313% ( 21) 00:43:19.536 3776.122 - 3791.726: 98.9618% ( 20) 00:43:19.536 3791.726 - 3807.330: 98.9892% ( 18) 00:43:19.536 3807.330 - 3822.933: 99.0121% ( 15) 00:43:19.536 3822.933 - 3838.537: 99.0426% ( 20) 00:43:19.536 3838.537 - 3854.141: 99.0715% ( 19) 00:43:19.536 3854.141 - 3869.745: 99.0975% ( 17) 00:43:19.536 3869.745 - 3885.349: 99.1234% ( 17) 00:43:19.536 3885.349 - 3900.952: 99.1432% ( 13) 00:43:19.536 3900.952 - 3916.556: 99.1630% ( 13) 00:43:19.536 3916.556 - 3932.160: 99.1828% ( 13) 00:43:19.536 3932.160 - 3947.764: 99.1981% ( 10) 00:43:19.536 3947.764 - 3963.368: 99.2164% ( 12) 00:43:19.536 3963.368 - 3978.971: 99.2377% ( 14) 00:43:19.536 3978.971 - 3994.575: 99.2560% ( 12) 00:43:19.536 3994.575 - 4025.783: 99.2911% ( 23) 00:43:19.536 4025.783 - 4056.990: 99.3170% ( 17) 00:43:19.536 4056.990 - 4088.198: 99.3460% ( 19) 00:43:19.536 4088.198 - 4119.406: 99.3704% ( 16) 00:43:19.536 4119.406 - 4150.613: 99.3948% ( 16) 00:43:19.536 4150.613 - 4181.821: 99.4176% ( 15) 00:43:19.536 4181.821 - 4213.029: 99.4420% ( 16) 00:43:19.536 4213.029 - 4244.236: 99.4649% ( 15) 00:43:19.536 4244.236 - 4275.444: 99.4878% ( 15) 00:43:19.536 4275.444 - 4306.651: 99.5076% ( 13) 00:43:19.536 4306.651 - 4337.859: 99.5289% ( 14) 00:43:19.536 4337.859 - 4369.067: 99.5487% ( 13) 00:43:19.536 4369.067 - 4400.274: 99.5701% ( 14) 00:43:19.536 4400.274 - 4431.482: 99.5899% ( 13) 00:43:19.536 4431.482 - 4462.690: 99.6112% ( 14) 00:43:19.536 4462.690 - 4493.897: 99.6295% ( 12) 00:43:19.536 4493.897 - 4525.105: 99.6478% ( 12) 00:43:19.536 4525.105 - 4556.312: 99.6631% ( 10) 00:43:19.536 4556.312 - 4587.520: 99.6768% ( 9) 00:43:19.536 4587.520 - 4618.728: 99.6920% ( 10) 00:43:19.536 4618.728 - 4649.935: 99.7058% ( 9) 00:43:19.536 4649.935 - 4681.143: 99.7210% ( 10) 00:43:19.536 4681.143 - 4712.350: 99.7347% ( 9) 00:43:19.536 4712.350 - 4743.558: 99.7484% ( 9) 00:43:19.536 4743.558 - 4774.766: 99.7637% ( 10) 00:43:19.536 4774.766 - 4805.973: 99.7789% ( 10) 00:43:19.536 4805.973 - 4837.181: 99.7896% ( 7) 00:43:19.536 4837.181 - 4868.389: 99.8003% ( 7) 00:43:19.536 4868.389 - 4899.596: 99.8079% ( 5) 00:43:19.536 4899.596 - 4930.804: 99.8186% ( 7) 00:43:19.536 4930.804 - 4962.011: 99.8277% ( 6) 00:43:19.536 4962.011 - 4993.219: 99.8353% ( 5) 00:43:19.536 4993.219 - 5024.427: 99.8460% ( 7) 00:43:19.536 5024.427 - 5055.634: 99.8536% ( 5) 00:43:19.536 5055.634 - 5086.842: 99.8582% ( 3) 00:43:19.536 5086.842 - 5118.050: 99.8658% ( 5) 00:43:19.536 5118.050 - 5149.257: 99.8704% ( 3) 00:43:19.536 5149.257 - 5180.465: 99.8750% ( 3) 00:43:19.536 5180.465 - 5211.672: 99.8780% ( 2) 00:43:19.536 5211.672 - 5242.880: 99.8826% ( 3) 00:43:19.536 5242.880 - 5274.088: 99.8857% ( 2) 00:43:19.536 5274.088 - 5305.295: 99.8902% ( 3) 00:43:19.536 5305.295 - 5336.503: 99.8948% ( 3) 00:43:19.536 5336.503 - 5367.710: 99.8994% ( 3) 00:43:19.536 5367.710 - 5398.918: 99.9009% ( 1) 00:43:19.536 5398.918 - 5430.126: 99.9024% ( 1) 00:43:19.536 5430.126 - 5461.333: 99.9040% ( 1) 00:43:19.536 5461.333 - 5492.541: 99.9055% ( 1) 00:43:19.536 5492.541 - 5523.749: 99.9070% ( 1) 00:43:19.536 5523.749 - 5554.956: 99.9085% ( 1) 00:43:19.536 5554.956 - 5586.164: 99.9101% ( 1) 00:43:19.536 5617.371 - 5648.579: 99.9116% ( 1) 00:43:19.536 5648.579 - 5679.787: 99.9131% ( 1) 00:43:19.536 5679.787 - 5710.994: 99.9146% ( 1) 00:43:19.536 5710.994 
- 5742.202: 99.9161% ( 1) 00:43:19.536 5742.202 - 5773.410: 99.9177% ( 1) 00:43:19.536 5773.410 - 5804.617: 99.9192% ( 1) 00:43:19.536 5835.825 - 5867.032: 99.9207% ( 1) 00:43:19.536 5867.032 - 5898.240: 99.9222% ( 1) 00:43:19.536 5898.240 - 5929.448: 99.9238% ( 1) 00:43:19.536 5960.655 - 5991.863: 99.9253% ( 1) 00:43:19.536 5991.863 - 6023.070: 99.9268% ( 1) 00:43:19.536 6023.070 - 6054.278: 99.9283% ( 1) 00:43:19.536 6054.278 - 6085.486: 99.9299% ( 1) 00:43:19.536 6085.486 - 6116.693: 99.9314% ( 1) 00:43:19.536 6116.693 - 6147.901: 99.9329% ( 1) 00:43:19.536 6147.901 - 6179.109: 99.9344% ( 1) 00:43:19.536 6210.316 - 6241.524: 99.9360% ( 1) 00:43:19.536 6241.524 - 6272.731: 99.9375% ( 1) 00:43:19.536 6272.731 - 6303.939: 99.9390% ( 1) 00:43:19.536 6303.939 - 6335.147: 99.9405% ( 1) 00:43:19.536 6335.147 - 6366.354: 99.9421% ( 1) 00:43:19.536 6366.354 - 6397.562: 99.9436% ( 1) 00:43:19.536 6397.562 - 6428.770: 99.9451% ( 1) 00:43:19.536 6428.770 - 6459.977: 99.9466% ( 1) 00:43:19.536 6459.977 - 6491.185: 99.9482% ( 1) 00:43:19.536 6522.392 - 6553.600: 99.9497% ( 1) 00:43:19.537 6553.600 - 6584.808: 99.9512% ( 1) 00:43:19.537 6584.808 - 6616.015: 99.9527% ( 1) 00:43:19.537 6616.015 - 6647.223: 99.9543% ( 1) 00:43:19.537 6647.223 - 6678.430: 99.9558% ( 1) 00:43:19.537 6678.430 - 6709.638: 99.9573% ( 1) 00:43:19.537 6709.638 - 6740.846: 99.9588% ( 1) 00:43:19.537 6772.053 - 6803.261: 99.9604% ( 1) 00:43:19.537 6803.261 - 6834.469: 99.9619% ( 1) 00:43:19.537 6834.469 - 6865.676: 99.9634% ( 1) 00:43:19.537 6865.676 - 6896.884: 99.9649% ( 1) 00:43:19.537 6896.884 - 6928.091: 99.9665% ( 1) 00:43:19.537 6959.299 - 6990.507: 99.9680% ( 1) 00:43:19.537 6990.507 - 7021.714: 99.9695% ( 1) 00:43:19.537 7021.714 - 7052.922: 99.9710% ( 1) 00:43:19.537 7052.922 - 7084.130: 99.9726% ( 1) 00:43:19.537 7084.130 - 7115.337: 99.9741% ( 1) 00:43:19.537 7115.337 - 7146.545: 99.9756% ( 1) 00:43:19.537 7146.545 - 7177.752: 99.9771% ( 1) 00:43:19.537 7208.960 - 7240.168: 99.9787% ( 1) 00:43:19.537 7240.168 - 7271.375: 99.9802% ( 1) 00:43:19.537 7271.375 - 7302.583: 99.9817% ( 1) 00:43:19.537 7302.583 - 7333.790: 99.9832% ( 1) 00:43:19.537 7333.790 - 7364.998: 99.9848% ( 1) 00:43:19.537 7364.998 - 7396.206: 99.9863% ( 1) 00:43:19.537 7396.206 - 7427.413: 99.9878% ( 1) 00:43:19.537 7427.413 - 7458.621: 99.9893% ( 1) 00:43:19.537 7489.829 - 7521.036: 99.9909% ( 1) 00:43:19.537 7521.036 - 7552.244: 99.9924% ( 1) 00:43:19.537 7552.244 - 7583.451: 99.9939% ( 1) 00:43:19.537 7583.451 - 7614.659: 99.9954% ( 1) 00:43:19.537 7614.659 - 7645.867: 99.9970% ( 1) 00:43:19.537 7677.074 - 7708.282: 99.9985% ( 1) 00:43:19.537 7708.282 - 7739.490: 100.0000% ( 1) 00:43:19.537 00:43:19.537 02:13:19 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:43:19.537 00:43:19.537 real 0m2.701s 00:43:19.537 user 0m2.242s 00:43:19.537 sys 0m0.325s 00:43:19.537 02:13:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:19.537 02:13:19 -- common/autotest_common.sh@10 -- # set +x 00:43:19.537 ************************************ 00:43:19.537 END TEST nvme_perf 00:43:19.537 ************************************ 00:43:19.537 02:13:19 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:43:19.537 02:13:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:43:19.537 02:13:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:19.537 02:13:19 -- common/autotest_common.sh@10 -- # set +x 00:43:19.537 ************************************ 00:43:19.537 START TEST 
nvme_hello_world 00:43:19.537 ************************************ 00:43:19.537 02:13:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:43:20.105 Initializing NVMe Controllers 00:43:20.105 Attached to 0000:00:10.0 00:43:20.105 Namespace ID: 1 size: 5GB 00:43:20.105 Initialization complete. 00:43:20.105 INFO: using host memory buffer for IO 00:43:20.105 Hello world! 00:43:20.105 00:43:20.105 real 0m0.365s 00:43:20.105 user 0m0.131s 00:43:20.105 sys 0m0.163s 00:43:20.105 02:13:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:20.105 02:13:19 -- common/autotest_common.sh@10 -- # set +x 00:43:20.106 ************************************ 00:43:20.106 END TEST nvme_hello_world 00:43:20.106 ************************************ 00:43:20.106 02:13:19 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:43:20.106 02:13:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:20.106 02:13:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:20.106 02:13:19 -- common/autotest_common.sh@10 -- # set +x 00:43:20.106 ************************************ 00:43:20.106 START TEST nvme_sgl 00:43:20.106 ************************************ 00:43:20.106 02:13:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:43:20.367 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:43:20.367 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:43:20.367 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:43:20.367 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:43:20.367 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:43:20.367 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:43:20.367 NVMe Readv/Writev Request test 00:43:20.367 Attached to 0000:00:10.0 00:43:20.367 0000:00:10.0: build_io_request_2 test passed 00:43:20.367 0000:00:10.0: build_io_request_4 test passed 00:43:20.367 0000:00:10.0: build_io_request_5 test passed 00:43:20.367 0000:00:10.0: build_io_request_6 test passed 00:43:20.367 0000:00:10.0: build_io_request_7 test passed 00:43:20.367 0000:00:10.0: build_io_request_10 test passed 00:43:20.367 Cleaning up... 00:43:20.367 00:43:20.367 real 0m0.422s 00:43:20.367 user 0m0.162s 00:43:20.367 sys 0m0.191s 00:43:20.367 02:13:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:20.367 02:13:20 -- common/autotest_common.sh@10 -- # set +x 00:43:20.367 ************************************ 00:43:20.367 END TEST nvme_sgl 00:43:20.367 ************************************ 00:43:20.628 02:13:20 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:43:20.628 02:13:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:20.628 02:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:20.628 02:13:20 -- common/autotest_common.sh@10 -- # set +x 00:43:20.628 ************************************ 00:43:20.628 START TEST nvme_e2edp 00:43:20.628 ************************************ 00:43:20.628 02:13:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:43:20.886 NVMe Write/Read with End-to-End data protection test 00:43:20.886 Attached to 0000:00:10.0 00:43:20.886 Cleaning up... 
00:43:20.886 00:43:20.886 real 0m0.400s 00:43:20.886 user 0m0.110s 00:43:20.886 sys 0m0.201s 00:43:20.886 02:13:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:20.886 02:13:20 -- common/autotest_common.sh@10 -- # set +x 00:43:20.886 ************************************ 00:43:20.886 END TEST nvme_e2edp 00:43:20.886 ************************************ 00:43:20.886 02:13:20 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:43:20.886 02:13:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:20.886 02:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:20.886 02:13:20 -- common/autotest_common.sh@10 -- # set +x 00:43:21.143 ************************************ 00:43:21.143 START TEST nvme_reserve 00:43:21.143 ************************************ 00:43:21.143 02:13:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:43:21.401 ===================================================== 00:43:21.401 NVMe Controller at PCI bus 0, device 16, function 0 00:43:21.401 ===================================================== 00:43:21.401 Reservations: Not Supported 00:43:21.401 Reservation test passed 00:43:21.401 00:43:21.401 real 0m0.365s 00:43:21.401 user 0m0.132s 00:43:21.401 sys 0m0.159s 00:43:21.401 02:13:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:21.401 02:13:21 -- common/autotest_common.sh@10 -- # set +x 00:43:21.401 ************************************ 00:43:21.401 END TEST nvme_reserve 00:43:21.401 ************************************ 00:43:21.401 02:13:21 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:43:21.401 02:13:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:21.401 02:13:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:21.401 02:13:21 -- common/autotest_common.sh@10 -- # set +x 00:43:21.401 ************************************ 00:43:21.401 START TEST nvme_err_injection 00:43:21.401 ************************************ 00:43:21.401 02:13:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:43:21.967 NVMe Error Injection test 00:43:21.967 Attached to 0000:00:10.0 00:43:21.967 0000:00:10.0: get features failed as expected 00:43:21.967 0000:00:10.0: get features successfully as expected 00:43:21.967 0000:00:10.0: read failed as expected 00:43:21.967 0000:00:10.0: read successfully as expected 00:43:21.967 Cleaning up... 
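Each functional check above (hello_world, sgl, e2edp, reserve) wraps a small standalone binary from the SPDK tree, which run_test simply executes against the emulated controller at 0000:00:10.0. A minimal sketch of invoking two of them by hand, using only the paths and flags that already appear in this log; the surrounding setup (hugepages, device binding, root privileges) is assumed to have been done by the harness earlier:

  SPDK=/home/vagrant/spdk_repo/spdk
  # uses a host memory buffer for IO and prints "Hello world!", as in the output above
  $SPDK/build/examples/hello_world -i 0
  # runs the build_io_request_* SGL cases; the invalid-length ones are expected to be rejected
  $SPDK/test/nvme/sgl/sgl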
00:43:21.967 00:43:21.967 real 0m0.372s 00:43:21.967 user 0m0.142s 00:43:21.967 sys 0m0.162s 00:43:21.967 02:13:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:21.967 02:13:21 -- common/autotest_common.sh@10 -- # set +x 00:43:21.967 ************************************ 00:43:21.967 END TEST nvme_err_injection 00:43:21.967 ************************************ 00:43:21.967 02:13:21 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:43:21.967 02:13:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:43:21.967 02:13:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:21.967 02:13:21 -- common/autotest_common.sh@10 -- # set +x 00:43:21.967 ************************************ 00:43:21.967 START TEST nvme_overhead 00:43:21.967 ************************************ 00:43:21.967 02:13:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:43:23.339 Initializing NVMe Controllers 00:43:23.339 Attached to 0000:00:10.0 00:43:23.339 Initialization complete. Launching workers. 00:43:23.339 submit (in ns) avg, min, max = 15504.3, 12173.3, 720505.7 00:43:23.339 complete (in ns) avg, min, max = 10281.3, 7822.9, 505785.7 00:43:23.339 00:43:23.339 Submit histogram 00:43:23.339 ================ 00:43:23.339 Range in us Cumulative Count 00:43:23.339 12.130 - 12.190: 0.0089% ( 1) 00:43:23.339 12.190 - 12.251: 0.0268% ( 2) 00:43:23.339 12.251 - 12.312: 0.0804% ( 6) 00:43:23.339 12.312 - 12.373: 0.1966% ( 13) 00:43:23.339 12.373 - 12.434: 0.2681% ( 8) 00:43:23.339 12.434 - 12.495: 0.3574% ( 10) 00:43:23.339 12.495 - 12.556: 0.4021% ( 5) 00:43:23.339 12.556 - 12.617: 0.4647% ( 7) 00:43:23.339 12.617 - 12.678: 0.4825% ( 2) 00:43:23.339 12.678 - 12.739: 0.5004% ( 2) 00:43:23.339 12.739 - 12.800: 0.5272% ( 3) 00:43:23.339 12.800 - 12.861: 0.5451% ( 2) 00:43:23.339 12.861 - 12.922: 0.5540% ( 1) 00:43:23.339 13.044 - 13.105: 0.5630% ( 1) 00:43:23.339 13.105 - 13.166: 1.1080% ( 61) 00:43:23.339 13.166 - 13.227: 2.9577% ( 207) 00:43:23.339 13.227 - 13.288: 6.7822% ( 428) 00:43:23.339 13.288 - 13.349: 12.0275% ( 587) 00:43:23.339 13.349 - 13.410: 16.3971% ( 489) 00:43:23.339 13.410 - 13.470: 19.6944% ( 369) 00:43:23.339 13.470 - 13.531: 21.7675% ( 232) 00:43:23.339 13.531 - 13.592: 23.1883% ( 159) 00:43:23.339 13.592 - 13.653: 24.2695% ( 121) 00:43:23.339 13.653 - 13.714: 25.0290% ( 85) 00:43:23.339 13.714 - 13.775: 25.4490% ( 47) 00:43:23.339 13.775 - 13.836: 25.7439% ( 33) 00:43:23.339 13.836 - 13.897: 26.0388% ( 33) 00:43:23.339 13.897 - 13.958: 27.2362% ( 134) 00:43:23.339 13.958 - 14.019: 30.7658% ( 395) 00:43:23.339 14.019 - 14.080: 36.9315% ( 690) 00:43:23.339 14.080 - 14.141: 44.6341% ( 862) 00:43:23.339 14.141 - 14.202: 51.3716% ( 754) 00:43:23.339 14.202 - 14.263: 56.0718% ( 526) 00:43:23.339 14.263 - 14.324: 59.3602% ( 368) 00:43:23.339 14.324 - 14.385: 61.3618% ( 224) 00:43:23.339 14.385 - 14.446: 63.2651% ( 213) 00:43:23.339 14.446 - 14.507: 65.2310% ( 220) 00:43:23.339 14.507 - 14.568: 66.9913% ( 197) 00:43:23.339 14.568 - 14.629: 68.6176% ( 182) 00:43:23.339 14.629 - 14.690: 69.6095% ( 111) 00:43:23.339 14.690 - 14.750: 70.5746% ( 108) 00:43:23.339 14.750 - 14.811: 71.2090% ( 71) 00:43:23.339 14.811 - 14.872: 71.7005% ( 55) 00:43:23.339 14.872 - 14.933: 72.1294% ( 48) 00:43:23.339 14.933 - 14.994: 72.5047% ( 42) 00:43:23.339 14.994 - 15.055: 72.8174% ( 35) 00:43:23.339 15.055 - 15.116: 73.0855% ( 30) 00:43:23.339 15.116 - 15.177: 
73.3625% ( 31) 00:43:23.339 15.177 - 15.238: 73.6574% ( 33) 00:43:23.339 15.238 - 15.299: 73.9344% ( 31) 00:43:23.339 15.299 - 15.360: 74.1757% ( 27) 00:43:23.339 15.360 - 15.421: 74.4437% ( 30) 00:43:23.339 15.421 - 15.482: 74.6850% ( 27) 00:43:23.339 15.482 - 15.543: 74.8101% ( 14) 00:43:23.339 15.543 - 15.604: 74.9084% ( 11) 00:43:23.339 15.604 - 15.726: 75.0424% ( 15) 00:43:23.339 15.726 - 15.848: 75.1497% ( 12) 00:43:23.339 15.848 - 15.970: 75.1854% ( 4) 00:43:23.339 15.970 - 16.091: 75.2212% ( 4) 00:43:23.339 16.091 - 16.213: 75.3016% ( 9) 00:43:23.339 16.213 - 16.335: 75.3731% ( 8) 00:43:23.339 16.335 - 16.457: 75.4803% ( 12) 00:43:23.339 16.457 - 16.579: 75.6054% ( 14) 00:43:23.339 16.579 - 16.701: 75.7930% ( 21) 00:43:23.339 16.701 - 16.823: 75.9718% ( 20) 00:43:23.339 16.823 - 16.945: 76.4722% ( 56) 00:43:23.339 16.945 - 17.067: 76.8921% ( 47) 00:43:23.339 17.067 - 17.189: 77.0262% ( 15) 00:43:23.339 17.189 - 17.310: 77.1781% ( 17) 00:43:23.339 17.310 - 17.432: 77.2496% ( 8) 00:43:23.339 17.432 - 17.554: 77.4194% ( 19) 00:43:23.339 17.554 - 17.676: 77.5891% ( 19) 00:43:23.339 17.676 - 17.798: 77.7142% ( 14) 00:43:23.339 17.798 - 17.920: 77.8929% ( 20) 00:43:23.339 17.920 - 18.042: 78.0538% ( 18) 00:43:23.339 18.042 - 18.164: 78.2593% ( 23) 00:43:23.339 18.164 - 18.286: 78.4112% ( 17) 00:43:23.339 18.286 - 18.408: 78.6525% ( 27) 00:43:23.339 18.408 - 18.530: 79.5639% ( 102) 00:43:23.339 18.530 - 18.651: 83.7101% ( 464) 00:43:23.339 18.651 - 18.773: 88.5890% ( 546) 00:43:23.339 18.773 - 18.895: 91.0106% ( 271) 00:43:23.339 18.895 - 19.017: 92.4135% ( 157) 00:43:23.339 19.017 - 19.139: 93.3071% ( 100) 00:43:23.339 19.139 - 19.261: 93.9058% ( 67) 00:43:23.339 19.261 - 19.383: 94.4152% ( 57) 00:43:23.339 19.383 - 19.505: 94.8173% ( 45) 00:43:23.339 19.505 - 19.627: 95.0496% ( 26) 00:43:23.339 19.627 - 19.749: 95.3266% ( 31) 00:43:23.339 19.749 - 19.870: 95.4606% ( 15) 00:43:23.339 19.870 - 19.992: 95.6125% ( 17) 00:43:23.339 19.992 - 20.114: 95.6751% ( 7) 00:43:23.339 20.114 - 20.236: 95.7913% ( 13) 00:43:23.339 20.236 - 20.358: 95.8717% ( 9) 00:43:23.339 20.358 - 20.480: 95.9164% ( 5) 00:43:23.339 20.480 - 20.602: 95.9610% ( 5) 00:43:23.339 20.602 - 20.724: 96.0683% ( 12) 00:43:23.339 20.724 - 20.846: 96.1934% ( 14) 00:43:23.339 20.846 - 20.968: 96.2202% ( 3) 00:43:23.339 20.968 - 21.090: 96.3006% ( 9) 00:43:23.339 21.090 - 21.211: 96.3363% ( 4) 00:43:23.339 21.211 - 21.333: 96.3989% ( 7) 00:43:23.339 21.333 - 21.455: 96.4168% ( 2) 00:43:23.339 21.455 - 21.577: 96.4436% ( 3) 00:43:23.339 21.577 - 21.699: 96.5151% ( 8) 00:43:23.339 21.699 - 21.821: 96.5508% ( 4) 00:43:23.339 21.821 - 21.943: 96.5955% ( 5) 00:43:23.339 21.943 - 22.065: 96.6402% ( 5) 00:43:23.339 22.065 - 22.187: 96.6580% ( 2) 00:43:23.339 22.187 - 22.309: 96.7027% ( 5) 00:43:23.339 22.309 - 22.430: 96.7563% ( 6) 00:43:23.339 22.430 - 22.552: 96.8457% ( 10) 00:43:23.339 22.552 - 22.674: 96.9261% ( 9) 00:43:23.339 22.674 - 22.796: 97.0244% ( 11) 00:43:23.339 22.796 - 22.918: 97.1048% ( 9) 00:43:23.339 22.918 - 23.040: 97.1406% ( 4) 00:43:23.339 23.040 - 23.162: 97.1942% ( 6) 00:43:23.339 23.162 - 23.284: 97.2657% ( 8) 00:43:23.339 23.284 - 23.406: 97.2835% ( 2) 00:43:23.339 23.406 - 23.528: 97.3371% ( 6) 00:43:23.340 23.528 - 23.650: 97.3640% ( 3) 00:43:23.340 23.650 - 23.771: 97.3818% ( 2) 00:43:23.340 23.771 - 23.893: 97.4086% ( 3) 00:43:23.340 23.893 - 24.015: 97.4444% ( 4) 00:43:23.340 24.015 - 24.137: 97.4801% ( 4) 00:43:23.340 24.137 - 24.259: 97.5427% ( 7) 00:43:23.340 24.259 - 24.381: 97.5963% ( 6) 00:43:23.340 
24.381 - 24.503: 97.6499% ( 6) 00:43:23.340 24.503 - 24.625: 97.7661% ( 13) 00:43:23.340 24.625 - 24.747: 97.8644% ( 11) 00:43:23.340 24.747 - 24.869: 97.9895% ( 14) 00:43:23.340 24.869 - 24.990: 98.1324% ( 16) 00:43:23.340 24.990 - 25.112: 98.3290% ( 22) 00:43:23.340 25.112 - 25.234: 98.5167% ( 21) 00:43:23.340 25.234 - 25.356: 98.6150% ( 11) 00:43:23.340 25.356 - 25.478: 98.6596% ( 5) 00:43:23.340 25.478 - 25.600: 98.7669% ( 12) 00:43:23.340 25.600 - 25.722: 98.8473% ( 9) 00:43:23.340 25.722 - 25.844: 98.8652% ( 2) 00:43:23.340 25.844 - 25.966: 98.8830% ( 2) 00:43:23.340 25.966 - 26.088: 98.8920% ( 1) 00:43:23.340 26.088 - 26.210: 98.9188% ( 3) 00:43:23.340 26.210 - 26.331: 98.9545% ( 4) 00:43:23.340 26.331 - 26.453: 98.9992% ( 5) 00:43:23.340 26.819 - 26.941: 99.0260% ( 3) 00:43:23.340 27.063 - 27.185: 99.0439% ( 2) 00:43:23.340 27.185 - 27.307: 99.0707% ( 3) 00:43:23.340 27.307 - 27.429: 99.0886% ( 2) 00:43:23.340 27.429 - 27.550: 99.0975% ( 1) 00:43:23.340 27.550 - 27.672: 99.1422% ( 5) 00:43:23.340 27.672 - 27.794: 99.1958% ( 6) 00:43:23.340 27.794 - 27.916: 99.2047% ( 1) 00:43:23.340 28.038 - 28.160: 99.2405% ( 4) 00:43:23.340 28.282 - 28.404: 99.2494% ( 1) 00:43:23.340 28.404 - 28.526: 99.2583% ( 1) 00:43:23.340 28.526 - 28.648: 99.2673% ( 1) 00:43:23.340 28.770 - 28.891: 99.2762% ( 1) 00:43:23.340 28.891 - 29.013: 99.2851% ( 1) 00:43:23.340 29.013 - 29.135: 99.2941% ( 1) 00:43:23.340 29.135 - 29.257: 99.3030% ( 1) 00:43:23.340 29.257 - 29.379: 99.3119% ( 1) 00:43:23.340 29.501 - 29.623: 99.3209% ( 1) 00:43:23.340 29.623 - 29.745: 99.3298% ( 1) 00:43:23.340 29.745 - 29.867: 99.3388% ( 1) 00:43:23.340 29.867 - 29.989: 99.3745% ( 4) 00:43:23.340 29.989 - 30.110: 99.4013% ( 3) 00:43:23.340 30.110 - 30.232: 99.4281% ( 3) 00:43:23.340 30.232 - 30.354: 99.4370% ( 1) 00:43:23.340 30.354 - 30.476: 99.4817% ( 5) 00:43:23.340 30.476 - 30.598: 99.5085% ( 3) 00:43:23.340 30.598 - 30.720: 99.5443% ( 4) 00:43:23.340 30.720 - 30.842: 99.5711% ( 3) 00:43:23.340 30.842 - 30.964: 99.6068% ( 4) 00:43:23.340 30.964 - 31.086: 99.6247% ( 2) 00:43:23.340 31.086 - 31.208: 99.6426% ( 2) 00:43:23.340 31.208 - 31.451: 99.6962% ( 6) 00:43:23.340 31.451 - 31.695: 99.7051% ( 1) 00:43:23.340 31.695 - 31.939: 99.7141% ( 1) 00:43:23.340 32.427 - 32.670: 99.7230% ( 1) 00:43:23.340 32.670 - 32.914: 99.7319% ( 1) 00:43:23.340 33.646 - 33.890: 99.7766% ( 5) 00:43:23.340 34.133 - 34.377: 99.7855% ( 1) 00:43:23.340 34.621 - 34.865: 99.7945% ( 1) 00:43:23.340 35.840 - 36.084: 99.8034% ( 1) 00:43:23.340 36.328 - 36.571: 99.8123% ( 1) 00:43:23.340 37.059 - 37.303: 99.8213% ( 1) 00:43:23.340 38.034 - 38.278: 99.8302% ( 1) 00:43:23.340 38.522 - 38.766: 99.8392% ( 1) 00:43:23.340 38.766 - 39.010: 99.8481% ( 1) 00:43:23.340 39.010 - 39.253: 99.8570% ( 1) 00:43:23.340 39.741 - 39.985: 99.8660% ( 1) 00:43:23.340 40.229 - 40.472: 99.8749% ( 1) 00:43:23.340 40.472 - 40.716: 99.8838% ( 1) 00:43:23.340 40.716 - 40.960: 99.8928% ( 1) 00:43:23.340 44.617 - 44.861: 99.9017% ( 1) 00:43:23.340 45.349 - 45.592: 99.9106% ( 1) 00:43:23.340 47.543 - 47.787: 99.9196% ( 1) 00:43:23.340 50.712 - 50.956: 99.9285% ( 1) 00:43:23.340 53.882 - 54.126: 99.9374% ( 1) 00:43:23.340 60.465 - 60.709: 99.9464% ( 1) 00:43:23.340 69.242 - 69.730: 99.9553% ( 1) 00:43:23.340 71.680 - 72.168: 99.9643% ( 1) 00:43:23.340 76.556 - 77.044: 99.9732% ( 1) 00:43:23.340 131.657 - 132.632: 99.9821% ( 1) 00:43:23.340 136.533 - 137.509: 99.9911% ( 1) 00:43:23.340 717.775 - 721.676: 100.0000% ( 1) 00:43:23.340 00:43:23.340 Complete histogram 00:43:23.340 ================== 
00:43:23.340 Range in us Cumulative Count 00:43:23.340 7.802 - 7.863: 0.1072% ( 12) 00:43:23.340 7.863 - 7.924: 0.3485% ( 27) 00:43:23.340 7.924 - 7.985: 0.4647% ( 13) 00:43:23.340 7.985 - 8.046: 0.4736% ( 1) 00:43:23.340 8.046 - 8.107: 0.4825% ( 1) 00:43:23.340 8.107 - 8.168: 0.5004% ( 2) 00:43:23.340 8.168 - 8.229: 0.5093% ( 1) 00:43:23.340 8.229 - 8.290: 0.5183% ( 1) 00:43:23.340 8.411 - 8.472: 0.5451% ( 3) 00:43:23.340 8.472 - 8.533: 2.1893% ( 184) 00:43:23.340 8.533 - 8.594: 10.4012% ( 919) 00:43:23.340 8.594 - 8.655: 18.2736% ( 881) 00:43:23.340 8.655 - 8.716: 21.5173% ( 363) 00:43:23.340 8.716 - 8.777: 22.6611% ( 128) 00:43:23.340 8.777 - 8.838: 23.4653% ( 90) 00:43:23.340 8.838 - 8.899: 24.0997% ( 71) 00:43:23.340 8.899 - 8.960: 24.9844% ( 99) 00:43:23.340 8.960 - 9.021: 25.3239% ( 38) 00:43:23.340 9.021 - 9.082: 27.6562% ( 261) 00:43:23.340 9.082 - 9.143: 40.2645% ( 1411) 00:43:23.340 9.143 - 9.204: 53.8022% ( 1515) 00:43:23.340 9.204 - 9.265: 59.4317% ( 630) 00:43:23.340 9.265 - 9.326: 61.3618% ( 216) 00:43:23.340 9.326 - 9.387: 62.3626% ( 112) 00:43:23.340 9.387 - 9.448: 64.8021% ( 273) 00:43:23.340 9.448 - 9.509: 68.6266% ( 428) 00:43:23.340 9.509 - 9.570: 71.1554% ( 283) 00:43:23.340 9.570 - 9.630: 72.2992% ( 128) 00:43:23.340 9.630 - 9.691: 72.7906% ( 55) 00:43:23.340 9.691 - 9.752: 73.1391% ( 39) 00:43:23.340 9.752 - 9.813: 73.4161% ( 31) 00:43:23.340 9.813 - 9.874: 73.7646% ( 39) 00:43:23.340 9.874 - 9.935: 74.0774% ( 35) 00:43:23.340 9.935 - 9.996: 74.2561% ( 20) 00:43:23.340 9.996 - 10.057: 74.3186% ( 7) 00:43:23.340 10.057 - 10.118: 74.3633% ( 5) 00:43:23.340 10.118 - 10.179: 74.4259% ( 7) 00:43:23.340 10.179 - 10.240: 74.5420% ( 13) 00:43:23.340 10.240 - 10.301: 74.6135% ( 8) 00:43:23.340 10.301 - 10.362: 74.7029% ( 10) 00:43:23.340 10.362 - 10.423: 74.7476% ( 5) 00:43:23.340 10.423 - 10.484: 74.7922% ( 5) 00:43:23.340 10.484 - 10.545: 74.8727% ( 9) 00:43:23.340 10.545 - 10.606: 74.9710% ( 11) 00:43:23.340 10.606 - 10.667: 75.0871% ( 13) 00:43:23.340 10.667 - 10.728: 75.1675% ( 9) 00:43:23.340 10.728 - 10.789: 75.2480% ( 9) 00:43:23.340 10.789 - 10.850: 75.2658% ( 2) 00:43:23.340 10.850 - 10.910: 75.3820% ( 13) 00:43:23.340 10.910 - 10.971: 75.7752% ( 44) 00:43:23.340 10.971 - 11.032: 76.1594% ( 43) 00:43:23.340 11.032 - 11.093: 76.3828% ( 25) 00:43:23.340 11.093 - 11.154: 76.4811% ( 11) 00:43:23.340 11.154 - 11.215: 76.5526% ( 8) 00:43:23.340 11.215 - 11.276: 76.6151% ( 7) 00:43:23.340 11.276 - 11.337: 76.6777% ( 7) 00:43:23.340 11.337 - 11.398: 76.7492% ( 8) 00:43:23.340 11.398 - 11.459: 76.7939% ( 5) 00:43:23.340 11.459 - 11.520: 76.8475% ( 6) 00:43:23.340 11.520 - 11.581: 76.9458% ( 11) 00:43:23.340 11.581 - 11.642: 76.9815% ( 4) 00:43:23.340 11.642 - 11.703: 77.0530% ( 8) 00:43:23.340 11.703 - 11.764: 77.0977% ( 5) 00:43:23.340 11.764 - 11.825: 77.1781% ( 9) 00:43:23.340 11.825 - 11.886: 77.3032% ( 14) 00:43:23.340 11.886 - 11.947: 77.4372% ( 15) 00:43:23.340 11.947 - 12.008: 77.5445% ( 12) 00:43:23.340 12.008 - 12.069: 77.6785% ( 15) 00:43:23.340 12.069 - 12.130: 77.7321% ( 6) 00:43:23.340 12.130 - 12.190: 77.8393% ( 12) 00:43:23.340 12.190 - 12.251: 77.9287% ( 10) 00:43:23.340 12.251 - 12.312: 78.1521% ( 25) 00:43:23.340 12.312 - 12.373: 78.9474% ( 89) 00:43:23.340 12.373 - 12.434: 82.2357% ( 368) 00:43:23.340 12.434 - 12.495: 86.9538% ( 528) 00:43:23.340 12.495 - 12.556: 90.4030% ( 386) 00:43:23.340 12.556 - 12.617: 92.0204% ( 181) 00:43:23.340 12.617 - 12.678: 92.6637% ( 72) 00:43:23.340 12.678 - 12.739: 93.0480% ( 43) 00:43:23.340 12.739 - 12.800: 93.2982% ( 
28) 00:43:23.340 12.800 - 12.861: 93.6199% ( 36) 00:43:23.340 12.861 - 12.922: 93.8701% ( 28) 00:43:23.340 12.922 - 12.983: 94.0756% ( 23) 00:43:23.340 12.983 - 13.044: 94.2543% ( 20) 00:43:23.340 13.044 - 13.105: 94.4241% ( 19) 00:43:23.340 13.105 - 13.166: 94.5581% ( 15) 00:43:23.340 13.166 - 13.227: 94.6385% ( 9) 00:43:23.340 13.227 - 13.288: 94.7547% ( 13) 00:43:23.340 13.288 - 13.349: 94.8083% ( 6) 00:43:23.340 13.349 - 13.410: 94.8441% ( 4) 00:43:23.340 13.410 - 13.470: 94.9334% ( 10) 00:43:23.340 13.470 - 13.531: 95.0228% ( 10) 00:43:23.340 13.531 - 13.592: 95.0853% ( 7) 00:43:23.340 13.592 - 13.653: 95.1926% ( 12) 00:43:23.340 13.653 - 13.714: 95.3087% ( 13) 00:43:23.340 13.714 - 13.775: 95.4785% ( 19) 00:43:23.340 13.775 - 13.836: 95.5411% ( 7) 00:43:23.340 13.836 - 13.897: 95.5679% ( 3) 00:43:23.340 13.897 - 13.958: 95.6394% ( 8) 00:43:23.340 13.958 - 14.019: 95.7019% ( 7) 00:43:23.340 14.019 - 14.080: 95.7287% ( 3) 00:43:23.340 14.080 - 14.141: 95.7376% ( 1) 00:43:23.341 14.141 - 14.202: 95.7823% ( 5) 00:43:23.341 14.202 - 14.263: 95.8181% ( 4) 00:43:23.341 14.263 - 14.324: 95.8359% ( 2) 00:43:23.341 14.324 - 14.385: 95.8449% ( 1) 00:43:23.341 14.385 - 14.446: 95.8985% ( 6) 00:43:23.341 14.446 - 14.507: 95.9342% ( 4) 00:43:23.341 14.507 - 14.568: 95.9432% ( 1) 00:43:23.341 14.568 - 14.629: 95.9700% ( 3) 00:43:23.341 14.629 - 14.690: 96.0147% ( 5) 00:43:23.341 14.690 - 14.750: 96.0593% ( 5) 00:43:23.341 14.750 - 14.811: 96.0861% ( 3) 00:43:23.341 14.811 - 14.872: 96.0951% ( 1) 00:43:23.341 14.872 - 14.933: 96.1040% ( 1) 00:43:23.341 14.933 - 14.994: 96.1398% ( 4) 00:43:23.341 14.994 - 15.055: 96.1755% ( 4) 00:43:23.341 15.055 - 15.116: 96.2112% ( 4) 00:43:23.341 15.116 - 15.177: 96.2380% ( 3) 00:43:23.341 15.177 - 15.238: 96.2738% ( 4) 00:43:23.341 15.238 - 15.299: 96.3006% ( 3) 00:43:23.341 15.299 - 15.360: 96.3274% ( 3) 00:43:23.341 15.360 - 15.421: 96.3989% ( 8) 00:43:23.341 15.421 - 15.482: 96.4257% ( 3) 00:43:23.341 15.482 - 15.543: 96.4793% ( 6) 00:43:23.341 15.543 - 15.604: 96.5508% ( 8) 00:43:23.341 15.604 - 15.726: 96.6759% ( 14) 00:43:23.341 15.726 - 15.848: 96.7831% ( 12) 00:43:23.341 15.848 - 15.970: 96.8814% ( 11) 00:43:23.341 15.970 - 16.091: 96.9618% ( 9) 00:43:23.341 16.091 - 16.213: 97.0780% ( 13) 00:43:23.341 16.213 - 16.335: 97.1495% ( 8) 00:43:23.341 16.335 - 16.457: 97.2925% ( 16) 00:43:23.341 16.457 - 16.579: 97.3640% ( 8) 00:43:23.341 16.579 - 16.701: 97.4444% ( 9) 00:43:23.341 16.701 - 16.823: 97.5069% ( 7) 00:43:23.341 16.823 - 16.945: 97.6052% ( 11) 00:43:23.341 16.945 - 17.067: 97.7035% ( 11) 00:43:23.341 17.067 - 17.189: 97.7571% ( 6) 00:43:23.341 17.189 - 17.310: 97.8107% ( 6) 00:43:23.341 17.310 - 17.432: 97.8822% ( 8) 00:43:23.341 17.432 - 17.554: 97.9358% ( 6) 00:43:23.341 17.554 - 17.676: 97.9895% ( 6) 00:43:23.341 17.676 - 17.798: 98.0252% ( 4) 00:43:23.341 17.798 - 17.920: 98.0699% ( 5) 00:43:23.341 17.920 - 18.042: 98.0967% ( 3) 00:43:23.341 18.042 - 18.164: 98.1235% ( 3) 00:43:23.341 18.164 - 18.286: 98.1324% ( 1) 00:43:23.341 18.286 - 18.408: 98.1503% ( 2) 00:43:23.341 18.408 - 18.530: 98.1592% ( 1) 00:43:23.341 18.530 - 18.651: 98.1771% ( 2) 00:43:23.341 18.651 - 18.773: 98.2307% ( 6) 00:43:23.341 18.773 - 18.895: 98.2486% ( 2) 00:43:23.341 18.895 - 19.017: 98.2843% ( 4) 00:43:23.341 19.017 - 19.139: 98.3201% ( 4) 00:43:23.341 19.139 - 19.261: 98.3290% ( 1) 00:43:23.341 19.261 - 19.383: 98.3648% ( 4) 00:43:23.341 19.383 - 19.505: 98.3737% ( 1) 00:43:23.341 19.627 - 19.749: 98.4184% ( 5) 00:43:23.341 19.749 - 19.870: 98.4988% ( 9) 
00:43:23.341 19.870 - 19.992: 98.6686% ( 19) 00:43:23.341 19.992 - 20.114: 98.8205% ( 17) 00:43:23.341 20.114 - 20.236: 98.9188% ( 11) 00:43:23.341 20.236 - 20.358: 98.9635% ( 5) 00:43:23.341 20.358 - 20.480: 99.1064% ( 16) 00:43:23.341 20.480 - 20.602: 99.2137% ( 12) 00:43:23.341 20.602 - 20.724: 99.2405% ( 3) 00:43:23.341 20.724 - 20.846: 99.2494% ( 1) 00:43:23.341 20.846 - 20.968: 99.2583% ( 1) 00:43:23.341 20.968 - 21.090: 99.2762% ( 2) 00:43:23.341 21.211 - 21.333: 99.2851% ( 1) 00:43:23.341 21.333 - 21.455: 99.2941% ( 1) 00:43:23.341 21.577 - 21.699: 99.3119% ( 2) 00:43:23.341 21.699 - 21.821: 99.3209% ( 1) 00:43:23.341 21.943 - 22.065: 99.3388% ( 2) 00:43:23.341 22.065 - 22.187: 99.3566% ( 2) 00:43:23.341 22.430 - 22.552: 99.3656% ( 1) 00:43:23.341 22.552 - 22.674: 99.3924% ( 3) 00:43:23.341 22.674 - 22.796: 99.4102% ( 2) 00:43:23.341 23.040 - 23.162: 99.4192% ( 1) 00:43:23.341 23.406 - 23.528: 99.4370% ( 2) 00:43:23.341 23.528 - 23.650: 99.4460% ( 1) 00:43:23.341 24.015 - 24.137: 99.4549% ( 1) 00:43:23.341 24.381 - 24.503: 99.4639% ( 1) 00:43:23.341 24.503 - 24.625: 99.4728% ( 1) 00:43:23.341 24.625 - 24.747: 99.4817% ( 1) 00:43:23.341 24.990 - 25.112: 99.4907% ( 1) 00:43:23.341 25.112 - 25.234: 99.5085% ( 2) 00:43:23.341 25.234 - 25.356: 99.5264% ( 2) 00:43:23.341 25.356 - 25.478: 99.5443% ( 2) 00:43:23.341 25.478 - 25.600: 99.5711% ( 3) 00:43:23.341 25.600 - 25.722: 99.6158% ( 5) 00:43:23.341 25.722 - 25.844: 99.6336% ( 2) 00:43:23.341 25.844 - 25.966: 99.6604% ( 3) 00:43:23.341 25.966 - 26.088: 99.6694% ( 1) 00:43:23.341 26.088 - 26.210: 99.6962% ( 3) 00:43:23.341 26.210 - 26.331: 99.7230% ( 3) 00:43:23.341 26.331 - 26.453: 99.7587% ( 4) 00:43:23.341 26.453 - 26.575: 99.7855% ( 3) 00:43:23.341 26.697 - 26.819: 99.8034% ( 2) 00:43:23.341 26.819 - 26.941: 99.8213% ( 2) 00:43:23.341 27.916 - 28.038: 99.8302% ( 1) 00:43:23.341 28.038 - 28.160: 99.8392% ( 1) 00:43:23.341 28.160 - 28.282: 99.8481% ( 1) 00:43:23.341 29.135 - 29.257: 99.8570% ( 1) 00:43:23.341 29.257 - 29.379: 99.8660% ( 1) 00:43:23.341 30.354 - 30.476: 99.8749% ( 1) 00:43:23.341 32.183 - 32.427: 99.8838% ( 1) 00:43:23.341 32.670 - 32.914: 99.8928% ( 1) 00:43:23.341 33.402 - 33.646: 99.9017% ( 1) 00:43:23.341 34.621 - 34.865: 99.9196% ( 2) 00:43:23.341 40.229 - 40.472: 99.9285% ( 1) 00:43:23.341 48.762 - 49.006: 99.9374% ( 1) 00:43:23.341 87.284 - 87.771: 99.9464% ( 1) 00:43:23.341 102.400 - 102.888: 99.9553% ( 1) 00:43:23.341 117.516 - 118.004: 99.9643% ( 1) 00:43:23.341 137.509 - 138.484: 99.9732% ( 1) 00:43:23.341 165.790 - 166.766: 99.9821% ( 1) 00:43:23.341 341.333 - 343.284: 99.9911% ( 1) 00:43:23.341 503.223 - 507.124: 100.0000% ( 1) 00:43:23.341 00:43:23.341 00:43:23.341 real 0m1.385s 00:43:23.341 user 0m1.124s 00:43:23.341 sys 0m0.166s 00:43:23.341 02:13:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:23.341 02:13:23 -- common/autotest_common.sh@10 -- # set +x 00:43:23.341 ************************************ 00:43:23.341 END TEST nvme_overhead 00:43:23.341 ************************************ 00:43:23.341 02:13:23 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:43:23.341 02:13:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:43:23.341 02:13:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:23.341 02:13:23 -- common/autotest_common.sh@10 -- # set +x 00:43:23.635 ************************************ 00:43:23.635 START TEST nvme_arbitration 00:43:23.635 ************************************ 00:43:23.635 
02:13:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:43:26.928 Initializing NVMe Controllers 00:43:26.928 Attached to 0000:00:10.0 00:43:26.928 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:43:26.928 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:43:26.928 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:43:26.928 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:43:26.928 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:43:26.928 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:43:26.928 Initialization complete. Launching workers. 00:43:26.928 Starting thread on core 1 with urgent priority queue 00:43:26.928 Starting thread on core 2 with urgent priority queue 00:43:26.928 Starting thread on core 3 with urgent priority queue 00:43:26.928 Starting thread on core 0 with urgent priority queue 00:43:26.928 QEMU NVMe Ctrl (12340 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:43:26.928 QEMU NVMe Ctrl (12340 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:43:26.928 QEMU NVMe Ctrl (12340 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:43:26.928 QEMU NVMe Ctrl (12340 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:43:26.928 ======================================================== 00:43:26.928 00:43:26.928 00:43:26.928 real 0m3.475s 00:43:26.928 user 0m9.391s 00:43:26.928 sys 0m0.136s 00:43:26.928 02:13:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:26.928 02:13:26 -- common/autotest_common.sh@10 -- # set +x 00:43:26.928 ************************************ 00:43:26.928 END TEST nvme_arbitration 00:43:26.928 ************************************ 00:43:26.928 02:13:26 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:43:26.928 02:13:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:43:26.928 02:13:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:26.928 02:13:26 -- common/autotest_common.sh@10 -- # set +x 00:43:26.928 ************************************ 00:43:26.928 START TEST nvme_single_aen 00:43:26.928 ************************************ 00:43:26.928 02:13:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:43:27.495 Asynchronous Event Request test 00:43:27.495 Attached to 0000:00:10.0 00:43:27.495 Reset controller to setup AER completions for this process 00:43:27.495 Registering asynchronous event callbacks... 00:43:27.495 Getting orig temperature thresholds of all controllers 00:43:27.495 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:27.495 Setting all controllers temperature threshold low to trigger AER 00:43:27.495 Waiting for all controllers temperature threshold to be set lower 00:43:27.495 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:27.495 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:43:27.495 Waiting for all controllers to trigger AER and reset threshold 00:43:27.495 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:27.495 Cleaning up... 
00:43:27.495 00:43:27.495 real 0m0.340s 00:43:27.495 user 0m0.137s 00:43:27.495 sys 0m0.138s 00:43:27.495 02:13:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:27.495 02:13:27 -- common/autotest_common.sh@10 -- # set +x 00:43:27.495 ************************************ 00:43:27.495 END TEST nvme_single_aen 00:43:27.495 ************************************ 00:43:27.495 02:13:27 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:43:27.495 02:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:27.495 02:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:27.495 02:13:27 -- common/autotest_common.sh@10 -- # set +x 00:43:27.495 ************************************ 00:43:27.495 START TEST nvme_doorbell_aers 00:43:27.495 ************************************ 00:43:27.495 02:13:27 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:43:27.495 02:13:27 -- nvme/nvme.sh@70 -- # bdfs=() 00:43:27.495 02:13:27 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:43:27.495 02:13:27 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:43:27.495 02:13:27 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:43:27.495 02:13:27 -- common/autotest_common.sh@1499 -- # bdfs=() 00:43:27.495 02:13:27 -- common/autotest_common.sh@1499 -- # local bdfs 00:43:27.495 02:13:27 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:27.495 02:13:27 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:27.495 02:13:27 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:43:27.495 02:13:27 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:43:27.495 02:13:27 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:43:27.495 02:13:27 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:43:27.495 02:13:27 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:43:27.753 [2024-04-24 02:13:27.830715] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149736) is not found. Dropping the request. 00:43:37.723 Executing: test_write_invalid_db 00:43:37.723 Waiting for AER completion... 00:43:37.723 Failure: test_write_invalid_db 00:43:37.723 00:43:37.723 Executing: test_invalid_db_write_overflow_sq 00:43:37.723 Waiting for AER completion... 00:43:37.723 Failure: test_invalid_db_write_overflow_sq 00:43:37.723 00:43:37.723 Executing: test_invalid_db_write_overflow_cq 00:43:37.723 Waiting for AER completion... 
00:43:37.723 Failure: test_invalid_db_write_overflow_cq 00:43:37.723 00:43:37.723 00:43:37.723 real 0m10.123s 00:43:37.723 user 0m7.262s 00:43:37.723 sys 0m2.787s 00:43:37.723 02:13:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:37.723 ************************************ 00:43:37.723 02:13:37 -- common/autotest_common.sh@10 -- # set +x 00:43:37.723 END TEST nvme_doorbell_aers 00:43:37.723 ************************************ 00:43:37.723 02:13:37 -- nvme/nvme.sh@97 -- # uname 00:43:37.723 02:13:37 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:43:37.723 02:13:37 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:43:37.723 02:13:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:43:37.723 02:13:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:37.723 02:13:37 -- common/autotest_common.sh@10 -- # set +x 00:43:37.723 ************************************ 00:43:37.723 START TEST nvme_multi_aen 00:43:37.723 ************************************ 00:43:37.723 02:13:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:43:37.982 [2024-04-24 02:13:37.916567] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149736) is not found. Dropping the request. 00:43:37.982 [2024-04-24 02:13:37.917328] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149736) is not found. Dropping the request. 00:43:37.982 [2024-04-24 02:13:37.917523] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149736) is not found. Dropping the request. 00:43:37.982 Child process pid: 149931 00:43:38.241 [Child] Asynchronous Event Request test 00:43:38.241 [Child] Attached to 0000:00:10.0 00:43:38.241 [Child] Registering asynchronous event callbacks... 00:43:38.241 [Child] Getting orig temperature thresholds of all controllers 00:43:38.241 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:38.241 [Child] Waiting for all controllers to trigger AER and reset threshold 00:43:38.241 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:38.241 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:38.241 [Child] Cleaning up... 00:43:38.500 Asynchronous Event Request test 00:43:38.500 Attached to 0000:00:10.0 00:43:38.500 Reset controller to setup AER completions for this process 00:43:38.500 Registering asynchronous event callbacks... 00:43:38.500 Getting orig temperature thresholds of all controllers 00:43:38.500 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:38.500 Setting all controllers temperature threshold low to trigger AER 00:43:38.500 Waiting for all controllers temperature threshold to be set lower 00:43:38.500 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:38.500 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:43:38.500 Waiting for all controllers to trigger AER and reset threshold 00:43:38.500 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:38.500 Cleaning up... 
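The nvme_doorbell_aers stage a little further up enumerates the attached controllers with scripts/gen_nvme.sh piped through jq and then runs the doorbell_aers binary once per BDF under a 10 second timeout. A condensed sketch of that loop, reconstructed from the xtrace lines in the log (variable names come from the trace; the single enumerated BDF here was 0000:00:10.0):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done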
00:43:38.500 00:43:38.500 real 0m0.711s 00:43:38.500 user 0m0.279s 00:43:38.500 sys 0m0.277s 00:43:38.500 02:13:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:38.500 ************************************ 00:43:38.500 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:43:38.500 END TEST nvme_multi_aen 00:43:38.500 ************************************ 00:43:38.500 02:13:38 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:43:38.500 02:13:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:43:38.500 02:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:38.500 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:43:38.500 ************************************ 00:43:38.500 START TEST nvme_startup 00:43:38.500 ************************************ 00:43:38.500 02:13:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:43:39.067 Initializing NVMe Controllers 00:43:39.067 Attached to 0000:00:10.0 00:43:39.067 Initialization complete. 00:43:39.067 Time used:237651.859 (us). 00:43:39.067 00:43:39.067 real 0m0.368s 00:43:39.067 user 0m0.115s 00:43:39.067 sys 0m0.183s 00:43:39.067 02:13:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:39.067 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:43:39.067 ************************************ 00:43:39.067 END TEST nvme_startup 00:43:39.067 ************************************ 00:43:39.067 02:13:38 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:43:39.067 02:13:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:39.067 02:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:39.067 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:43:39.067 ************************************ 00:43:39.067 START TEST nvme_multi_secondary 00:43:39.067 ************************************ 00:43:39.067 02:13:38 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:43:39.067 02:13:38 -- nvme/nvme.sh@52 -- # pid0=150015 00:43:39.068 02:13:38 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:43:39.068 02:13:38 -- nvme/nvme.sh@54 -- # pid1=150016 00:43:39.068 02:13:38 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:43:39.068 02:13:38 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:43:42.354 Initializing NVMe Controllers 00:43:42.354 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:42.354 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:43:42.354 Initialization complete. Launching workers. 00:43:42.354 ======================================================== 00:43:42.354 Latency(us) 00:43:42.354 Device Information : IOPS MiB/s Average min max 00:43:42.354 PCIE (0000:00:10.0) NSID 1 from core 1: 34553.40 134.97 462.71 164.88 2371.61 00:43:42.354 ======================================================== 00:43:42.354 Total : 34553.40 134.97 462.71 164.88 2371.61 00:43:42.354 00:43:42.631 Initializing NVMe Controllers 00:43:42.631 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:42.631 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:43:42.631 Initialization complete. Launching workers. 
00:43:42.631 ======================================================== 00:43:42.631 Latency(us) 00:43:42.631 Device Information : IOPS MiB/s Average min max 00:43:42.631 PCIE (0000:00:10.0) NSID 1 from core 2: 14853.80 58.02 1076.46 165.94 28626.14 00:43:42.631 ======================================================== 00:43:42.631 Total : 14853.80 58.02 1076.46 165.94 28626.14 00:43:42.631 00:43:42.631 02:13:42 -- nvme/nvme.sh@56 -- # wait 150015 00:43:44.532 Initializing NVMe Controllers 00:43:44.532 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:44.532 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:44.532 Initialization complete. Launching workers. 00:43:44.532 ======================================================== 00:43:44.532 Latency(us) 00:43:44.532 Device Information : IOPS MiB/s Average min max 00:43:44.532 PCIE (0000:00:10.0) NSID 1 from core 0: 40921.60 159.85 390.66 154.73 1872.42 00:43:44.532 ======================================================== 00:43:44.532 Total : 40921.60 159.85 390.66 154.73 1872.42 00:43:44.532 00:43:44.532 02:13:44 -- nvme/nvme.sh@57 -- # wait 150016 00:43:44.532 02:13:44 -- nvme/nvme.sh@61 -- # pid0=150091 00:43:44.532 02:13:44 -- nvme/nvme.sh@63 -- # pid1=150092 00:43:44.533 02:13:44 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:43:44.533 02:13:44 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:43:44.533 02:13:44 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:43:47.903 Initializing NVMe Controllers 00:43:47.903 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:47.903 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:47.903 Initialization complete. Launching workers. 00:43:47.903 ======================================================== 00:43:47.903 Latency(us) 00:43:47.903 Device Information : IOPS MiB/s Average min max 00:43:47.903 PCIE (0000:00:10.0) NSID 1 from core 0: 30826.67 120.42 518.66 166.33 2291.41 00:43:47.903 ======================================================== 00:43:47.903 Total : 30826.67 120.42 518.66 166.33 2291.41 00:43:47.903 00:43:48.471 Initializing NVMe Controllers 00:43:48.471 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:48.471 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:43:48.471 Initialization complete. Launching workers. 00:43:48.471 ======================================================== 00:43:48.471 Latency(us) 00:43:48.471 Device Information : IOPS MiB/s Average min max 00:43:48.471 PCIE (0000:00:10.0) NSID 1 from core 1: 31678.84 123.75 504.72 159.90 2466.43 00:43:48.471 ======================================================== 00:43:48.471 Total : 31678.84 123.75 504.72 159.90 2466.43 00:43:48.471 00:43:50.370 Initializing NVMe Controllers 00:43:50.370 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:50.370 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:43:50.370 Initialization complete. Launching workers. 
00:43:50.370 ======================================================== 00:43:50.370 Latency(us) 00:43:50.370 Device Information : IOPS MiB/s Average min max 00:43:50.370 PCIE (0000:00:10.0) NSID 1 from core 2: 17110.32 66.84 933.95 128.30 21252.11 00:43:50.370 ======================================================== 00:43:50.370 Total : 17110.32 66.84 933.95 128.30 21252.11 00:43:50.370 00:43:50.370 02:13:50 -- nvme/nvme.sh@65 -- # wait 150091 00:43:50.370 02:13:50 -- nvme/nvme.sh@66 -- # wait 150092 00:43:50.370 00:43:50.370 real 0m11.220s 00:43:50.370 user 0m18.826s 00:43:50.370 sys 0m0.976s 00:43:50.370 02:13:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:50.370 02:13:50 -- common/autotest_common.sh@10 -- # set +x 00:43:50.370 ************************************ 00:43:50.370 END TEST nvme_multi_secondary 00:43:50.370 ************************************ 00:43:50.370 02:13:50 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:43:50.370 02:13:50 -- nvme/nvme.sh@102 -- # kill_stub 00:43:50.370 02:13:50 -- common/autotest_common.sh@1075 -- # [[ -e /proc/149248 ]] 00:43:50.370 02:13:50 -- common/autotest_common.sh@1076 -- # kill 149248 00:43:50.370 02:13:50 -- common/autotest_common.sh@1077 -- # wait 149248 00:43:50.370 [2024-04-24 02:13:50.236938] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149930) is not found. Dropping the request. 00:43:50.371 [2024-04-24 02:13:50.237218] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149930) is not found. Dropping the request. 00:43:50.371 [2024-04-24 02:13:50.237354] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149930) is not found. Dropping the request. 00:43:50.371 [2024-04-24 02:13:50.237480] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149930) is not found. Dropping the request. 00:43:50.628 [2024-04-24 02:13:50.541600] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:43:50.628 02:13:50 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:43:50.628 02:13:50 -- common/autotest_common.sh@1083 -- # echo 2 00:43:50.628 02:13:50 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:43:50.628 02:13:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:50.628 02:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:50.628 02:13:50 -- common/autotest_common.sh@10 -- # set +x 00:43:50.628 ************************************ 00:43:50.628 START TEST bdev_nvme_reset_stuck_adm_cmd 00:43:50.628 ************************************ 00:43:50.628 02:13:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:43:50.965 * Looking for test storage... 
00:43:50.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:43:50.965 02:13:50 -- common/autotest_common.sh@1510 -- # bdfs=() 00:43:50.965 02:13:50 -- common/autotest_common.sh@1510 -- # local bdfs 00:43:50.965 02:13:50 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:43:50.965 02:13:50 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:43:50.965 02:13:50 -- common/autotest_common.sh@1499 -- # bdfs=() 00:43:50.965 02:13:50 -- common/autotest_common.sh@1499 -- # local bdfs 00:43:50.965 02:13:50 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:50.965 02:13:50 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:43:50.965 02:13:50 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:50.965 02:13:50 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:43:50.965 02:13:50 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:43:50.965 02:13:50 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:43:50.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=150247 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:50.965 02:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 150247 00:43:50.965 02:13:50 -- common/autotest_common.sh@817 -- # '[' -z 150247 ']' 00:43:50.965 02:13:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:50.965 02:13:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:43:50.965 02:13:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:50.965 02:13:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:43:50.965 02:13:50 -- common/autotest_common.sh@10 -- # set +x 00:43:50.965 [2024-04-24 02:13:50.899042] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:43:50.965 [2024-04-24 02:13:50.899198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150247 ] 00:43:51.223 [2024-04-24 02:13:51.111153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:51.481 [2024-04-24 02:13:51.389090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:51.481 [2024-04-24 02:13:51.389531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:51.481 [2024-04-24 02:13:51.389708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:51.481 [2024-04-24 02:13:51.389709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:43:52.415 02:13:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:43:52.415 02:13:52 -- common/autotest_common.sh@850 -- # return 0 00:43:52.415 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:43:52.415 02:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:52.416 02:13:52 -- common/autotest_common.sh@10 -- # set +x 00:43:52.416 nvme0n1 00:43:52.416 02:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_rVktw.txt 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:43:52.416 02:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:52.416 02:13:52 -- common/autotest_common.sh@10 -- # set +x 00:43:52.416 true 00:43:52.416 02:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713924832 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=150283 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:43:52.416 02:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:43:54.946 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:43:54.946 02:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:54.946 02:13:54 -- common/autotest_common.sh@10 -- # set +x 00:43:54.946 [2024-04-24 02:13:54.468785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:43:54.946 [2024-04-24 02:13:54.469289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:54.946 [2024-04-24 02:13:54.469387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:43:54.946 [2024-04-24 02:13:54.469481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:54.946 [2024-04-24 02:13:54.471327] 
bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:43:54.946 02:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:54.946 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 150283 00:43:54.946 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 150283 00:43:54.946 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 150283 00:43:54.946 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:43:54.946 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:43:54.946 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:43:54.946 02:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:43:54.946 02:13:54 -- common/autotest_common.sh@10 -- # set +x 00:43:54.947 02:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_rVktw.txt 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_rVktw.txt 00:43:54.947 02:13:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 150247 00:43:54.947 02:13:54 -- common/autotest_common.sh@936 -- # '[' -z 150247 ']' 00:43:54.947 02:13:54 -- common/autotest_common.sh@940 -- # kill -0 150247 00:43:54.947 02:13:54 -- common/autotest_common.sh@941 -- # uname 00:43:54.947 02:13:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:43:54.947 02:13:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
150247 00:43:54.947 02:13:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:43:54.947 killing process with pid 150247 00:43:54.947 02:13:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:43:54.947 02:13:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150247' 00:43:54.947 02:13:54 -- common/autotest_common.sh@955 -- # kill 150247 00:43:54.947 02:13:54 -- common/autotest_common.sh@960 -- # wait 150247 00:43:57.491 02:13:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:43:57.491 02:13:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:43:57.491 00:43:57.491 real 0m6.630s 00:43:57.491 user 0m22.841s 00:43:57.491 sys 0m0.727s 00:43:57.491 02:13:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:57.491 02:13:57 -- common/autotest_common.sh@10 -- # set +x 00:43:57.491 ************************************ 00:43:57.491 END TEST bdev_nvme_reset_stuck_adm_cmd 00:43:57.491 ************************************ 00:43:57.491 02:13:57 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:43:57.491 02:13:57 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:43:57.491 02:13:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:57.491 02:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:57.491 02:13:57 -- common/autotest_common.sh@10 -- # set +x 00:43:57.491 ************************************ 00:43:57.491 START TEST nvme_fio 00:43:57.491 ************************************ 00:43:57.491 02:13:57 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:43:57.491 02:13:57 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:43:57.491 02:13:57 -- nvme/nvme.sh@32 -- # ran_fio=false 00:43:57.491 02:13:57 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:43:57.491 02:13:57 -- common/autotest_common.sh@1499 -- # bdfs=() 00:43:57.491 02:13:57 -- common/autotest_common.sh@1499 -- # local bdfs 00:43:57.491 02:13:57 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:57.491 02:13:57 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:57.491 02:13:57 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:43:57.491 02:13:57 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:43:57.491 02:13:57 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:43:57.491 02:13:57 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:43:57.491 02:13:57 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:43:57.491 02:13:57 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:43:57.491 02:13:57 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:43:57.491 02:13:57 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:43:57.750 02:13:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:43:57.750 02:13:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:43:58.007 02:13:57 -- nvme/nvme.sh@41 -- # bs=4096 00:43:58.007 02:13:57 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:43:58.007 02:13:57 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:43:58.007 02:13:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:43:58.007 02:13:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:58.007 02:13:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:43:58.007 02:13:57 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:43:58.007 02:13:57 -- common/autotest_common.sh@1327 -- # shift 00:43:58.007 02:13:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:43:58.007 02:13:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:43:58.007 02:13:57 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:43:58.007 02:13:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:43:58.007 02:13:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:43:58.007 02:13:57 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:43:58.007 02:13:57 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:43:58.007 02:13:57 -- common/autotest_common.sh@1333 -- # break 00:43:58.007 02:13:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:43:58.007 02:13:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:43:58.265 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:43:58.265 fio-3.35 00:43:58.265 Starting 1 thread 00:44:01.547 00:44:01.547 test: (groupid=0, jobs=1): err= 0: pid=150444: Wed Apr 24 02:14:01 2024 00:44:01.547 read: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(147MiB/2001msec) 00:44:01.547 slat (usec): min=4, max=114, avg= 5.51, stdev= 1.35 00:44:01.547 clat (usec): min=209, max=8806, avg=3376.48, stdev=668.29 00:44:01.547 lat (usec): min=214, max=8811, avg=3381.99, stdev=668.85 00:44:01.547 clat percentiles (usec): 00:44:01.547 | 1.00th=[ 1614], 5.00th=[ 2114], 10.00th=[ 2540], 20.00th=[ 2999], 00:44:01.547 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3359], 60.00th=[ 3490], 00:44:01.547 | 70.00th=[ 3687], 80.00th=[ 3916], 90.00th=[ 4047], 95.00th=[ 4178], 00:44:01.547 | 99.00th=[ 5014], 99.50th=[ 5932], 99.90th=[ 7832], 99.95th=[ 8356], 00:44:01.547 | 99.99th=[ 8455] 00:44:01.547 bw ( KiB/s): min=67417, max=74160, per=92.52%, avg=69821.67, stdev=3764.48, samples=3 00:44:01.547 iops : min=16854, max=18540, avg=17455.33, stdev=941.20, samples=3 00:44:01.547 write: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(148MiB/2001msec); 0 zone resets 00:44:01.547 slat (nsec): min=4481, max=81197, avg=5732.40, stdev=1282.63 00:44:01.547 clat (usec): min=200, max=8571, avg=3377.34, stdev=671.00 00:44:01.547 lat (usec): min=205, max=8577, avg=3383.07, stdev=671.56 00:44:01.547 clat percentiles (usec): 00:44:01.547 | 1.00th=[ 1582], 5.00th=[ 2114], 10.00th=[ 2540], 20.00th=[ 2999], 00:44:01.547 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3359], 60.00th=[ 3523], 00:44:01.547 | 70.00th=[ 3720], 80.00th=[ 3916], 90.00th=[ 4080], 95.00th=[ 4178], 00:44:01.547 | 99.00th=[ 4948], 99.50th=[ 5800], 99.90th=[ 7963], 99.95th=[ 8455], 00:44:01.547 | 99.99th=[ 8455] 00:44:01.547 bw ( KiB/s): min=67616, max=74168, per=92.50%, avg=69850.67, 
stdev=3739.69, samples=3 00:44:01.547 iops : min=16904, max=18542, avg=17462.67, stdev=934.92, samples=3 00:44:01.547 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.05% 00:44:01.547 lat (msec) : 2=3.66%, 4=82.52%, 10=13.74% 00:44:01.547 cpu : usr=99.95%, sys=0.00%, ctx=11, majf=0, minf=36 00:44:01.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:44:01.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:01.547 issued rwts: total=37751,37775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:01.547 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:01.547 00:44:01.547 Run status group 0 (all jobs): 00:44:01.547 READ: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=147MiB (155MB), run=2001-2001msec 00:44:01.547 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=148MiB (155MB), run=2001-2001msec 00:44:01.547 ----------------------------------------------------- 00:44:01.547 Suppressions used: 00:44:01.547 count bytes template 00:44:01.547 1 32 /usr/src/fio/parse.c 00:44:01.547 ----------------------------------------------------- 00:44:01.547 00:44:01.547 02:14:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:44:01.547 02:14:01 -- nvme/nvme.sh@46 -- # true 00:44:01.547 00:44:01.547 real 0m4.213s 00:44:01.547 user 0m3.472s 00:44:01.547 sys 0m0.428s 00:44:01.547 02:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:01.547 ************************************ 00:44:01.547 END TEST nvme_fio 00:44:01.547 ************************************ 00:44:01.547 02:14:01 -- common/autotest_common.sh@10 -- # set +x 00:44:01.547 00:44:01.547 real 0m49.842s 00:44:01.547 user 2m11.947s 00:44:01.547 sys 0m10.750s 00:44:01.547 02:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:01.547 02:14:01 -- common/autotest_common.sh@10 -- # set +x 00:44:01.547 ************************************ 00:44:01.547 END TEST nvme 00:44:01.547 ************************************ 00:44:01.806 02:14:01 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:44:01.806 02:14:01 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:44:01.806 02:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:44:01.806 02:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:01.806 02:14:01 -- common/autotest_common.sh@10 -- # set +x 00:44:01.806 ************************************ 00:44:01.806 START TEST nvme_scc 00:44:01.806 ************************************ 00:44:01.806 02:14:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:44:01.806 * Looking for test storage... 
00:44:01.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:01.806 02:14:01 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:44:01.806 02:14:01 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:44:01.806 02:14:01 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:44:01.806 02:14:01 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:01.806 02:14:01 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:01.806 02:14:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:01.806 02:14:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:01.806 02:14:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:01.806 02:14:01 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:01.806 02:14:01 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:01.806 02:14:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:01.806 02:14:01 -- paths/export.sh@5 -- # export PATH 00:44:01.806 02:14:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:01.806 02:14:01 -- nvme/functions.sh@10 -- # ctrls=() 00:44:01.806 02:14:01 -- nvme/functions.sh@10 -- # declare -A ctrls 00:44:01.806 02:14:01 -- nvme/functions.sh@11 -- # nvmes=() 00:44:01.806 02:14:01 -- nvme/functions.sh@11 -- # declare -A nvmes 00:44:01.806 02:14:01 -- nvme/functions.sh@12 -- # bdfs=() 00:44:01.806 02:14:01 -- nvme/functions.sh@12 -- # declare -A bdfs 00:44:01.806 02:14:01 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:44:01.806 02:14:01 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:44:01.806 02:14:01 -- nvme/functions.sh@14 -- # nvme_name= 00:44:01.806 02:14:01 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:01.806 02:14:01 -- nvme/nvme_scc.sh@12 -- # uname 00:44:01.806 02:14:01 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:44:01.806 02:14:01 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
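The xtrace that follows is functions.sh scanning the controller for the SCC test: scan_nvme_ctrls walks /sys/class/nvme/nvme*, resolves each controller to its PCI address, and nvme_get turns the key:value output of `nvme id-ctrl` / `nvme id-ns` into bash associative arrays (nvme0[vid], nvme0[mdts], nvme0n1[nsze], ...) for the rest of the test to query. A stripped-down sketch of that parsing pattern, using an illustrative array name and device path rather than the real functions.sh internals, is:

# Minimal sketch of the id-ctrl parsing visible in the trace below;
# the array name and /dev/nvme0 are illustrative.
declare -A ctrl
while IFS=: read -r reg val; do
    # Skip the banner and blank lines, which have no value part.
    [[ -n $reg && -n $val ]] || continue
    reg=$(echo "$reg" | tr -d '[:space:]')        # e.g. "mdts"
    val=$(echo "$val" | sed 's/^[[:space:]]*//')  # e.g. "7"
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
echo "sn=${ctrl[sn]} mdts=${ctrl[mdts]} wctemp=${ctrl[wctemp]}"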
00:44:01.806 02:14:01 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:02.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:02.375 Waiting for block devices as requested 00:44:02.375 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:02.375 02:14:02 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:44:02.375 02:14:02 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:44:02.375 02:14:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:44:02.375 02:14:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:44:02.375 02:14:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:44:02.375 02:14:02 -- scripts/common.sh@15 -- # local i 00:44:02.375 02:14:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:44:02.375 02:14:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:44:02.375 02:14:02 -- scripts/common.sh@24 -- # return 0 00:44:02.375 02:14:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:44:02.375 02:14:02 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:44:02.375 02:14:02 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@18 -- # shift 00:44:02.375 02:14:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:44:02.375 02:14:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:44:02.375 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.375 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.375 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 
00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:44:02.376 02:14:02 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:44:02.376 02:14:02 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.376 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.376 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- 
# read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:44:02.377 
02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:44:02.377 
02:14:02 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.377 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.377 02:14:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:44:02.377 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 
02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:44:02.378 02:14:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:44:02.378 02:14:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:44:02.378 02:14:02 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:44:02.378 02:14:02 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@18 -- # shift 00:44:02.378 02:14:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 
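Everything above is nvme/functions.sh walking the output of nvme id-ctrl line by line and caching every field of the Identify Controller data in a bash associative array (nvme0[sqes]=0x66, nvme0[oncs]=0x15d, and so on); the same loop is about to run again for the namespace via nvme id-ns. A stripped-down sketch of the idea, assuming nvme-cli's usual "field : value" text output (the real helper is more elaborate, this is only an illustration):

  # Minimal sketch: cache "field : value" pairs from nvme-cli in an associative array.
  declare -gA nvme0=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}              # field name with padding stripped, e.g. "ps    0" -> ps0
      val=${val#"${val%%[![:space:]]*}"}    # trim leading whitespace from the value
      [[ -n $val ]] && nvme0[$reg]=$val     # skip blank lines
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)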
00:44:02.378 02:14:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.378 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.378 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:44:02.378 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 
02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:44:02.379 02:14:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # IFS=: 00:44:02.379 02:14:02 -- nvme/functions.sh@21 -- # read -r reg val 00:44:02.379 02:14:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:44:02.379 02:14:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:44:02.379 02:14:02 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:44:02.379 02:14:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:44:02.379 02:14:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:44:02.379 02:14:02 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:44:02.379 02:14:02 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:44:02.379 02:14:02 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:44:02.379 02:14:02 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:44:02.379 02:14:02 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:44:02.379 02:14:02 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:44:02.379 02:14:02 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:44:02.379 02:14:02 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:44:02.379 02:14:02 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:44:02.379 02:14:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:44:02.379 02:14:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:44:02.379 02:14:02 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:44:02.698 02:14:02 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:44:02.698 02:14:02 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:44:02.698 02:14:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:44:02.698 02:14:02 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:44:02.698 02:14:02 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:44:02.698 02:14:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:44:02.698 02:14:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:44:02.698 02:14:02 -- nvme/functions.sh@76 -- # echo 0x15d 00:44:02.698 02:14:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:44:02.698 02:14:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:44:02.698 02:14:02 -- nvme/functions.sh@197 -- # echo nvme0 00:44:02.698 02:14:02 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:44:02.698 02:14:02 -- nvme/functions.sh@206 -- # echo nvme0 00:44:02.698 02:14:02 -- nvme/functions.sh@207 -- # return 0 00:44:02.698 02:14:02 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:44:02.698 02:14:02 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:44:02.698 02:14:02 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:02.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:02.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:03.891 02:14:03 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:44:03.891 02:14:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:44:03.891 02:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:03.891 02:14:03 -- common/autotest_common.sh@10 -- # set +x 00:44:03.891 ************************************ 00:44:03.891 START TEST nvme_simple_copy 00:44:03.891 ************************************ 00:44:03.891 02:14:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:44:04.457 Initializing NVMe Controllers 00:44:04.457 Attaching to 0000:00:10.0 00:44:04.457 Controller supports SCC. Attached to 0000:00:10.0 00:44:04.457 Namespace ID: 1 size: 5GB 00:44:04.457 Initialization complete. 00:44:04.457 00:44:04.457 Controller QEMU NVMe Ctrl (12340 ) 00:44:04.457 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:44:04.457 Namespace Block Size:4096 00:44:04.457 Writing LBAs 0 to 63 with Random Data 00:44:04.457 Copied LBAs from 0 - 63 to the Destination LBA 256 00:44:04.457 LBAs matching Written Data: 64 00:44:04.457 00:44:04.457 real 0m0.356s 00:44:04.457 user 0m0.145s 00:44:04.457 sys 0m0.112s 00:44:04.457 02:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:04.457 02:14:04 -- common/autotest_common.sh@10 -- # set +x 00:44:04.457 ************************************ 00:44:04.457 END TEST nvme_simple_copy 00:44:04.457 ************************************ 00:44:04.457 00:44:04.457 real 0m2.676s 00:44:04.457 user 0m0.772s 00:44:04.457 sys 0m1.826s 00:44:04.457 02:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:04.457 02:14:04 -- common/autotest_common.sh@10 -- # set +x 00:44:04.457 ************************************ 00:44:04.457 END TEST nvme_scc 00:44:04.457 ************************************ 00:44:04.457 02:14:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:44:04.457 02:14:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:44:04.457 02:14:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:44:04.457 02:14:04 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:44:04.457 02:14:04 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:44:04.457 02:14:04 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:44:04.457 02:14:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:44:04.457 02:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:04.457 02:14:04 -- common/autotest_common.sh@10 -- # set +x 00:44:04.457 ************************************ 00:44:04.457 START TEST nvme_rpc 00:44:04.457 ************************************ 00:44:04.457 02:14:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:44:04.716 * Looking for test storage... 
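What gated the nvme_scc suite above is a single bit test: ctrl_has_scc read ONCS back from the cached Identify data as 0x15d and checked bit 8, the bit that advertises the Copy command, before echoing nvme0 as a usable controller. Reduced to its core, the gate is just:

  oncs=0x15d                       # value cached from id-ctrl above
  if (( oncs & (1 << 8) )); then   # ONCS bit 8: Copy (simple copy) command supported
      echo "nvme0 supports the Copy command"
  fi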
00:44:04.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:44:04.716 02:14:04 -- common/autotest_common.sh@1510 -- # bdfs=() 00:44:04.716 02:14:04 -- common/autotest_common.sh@1510 -- # local bdfs 00:44:04.716 02:14:04 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:44:04.716 02:14:04 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:44:04.716 02:14:04 -- common/autotest_common.sh@1499 -- # bdfs=() 00:44:04.716 02:14:04 -- common/autotest_common.sh@1499 -- # local bdfs 00:44:04.716 02:14:04 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:04.716 02:14:04 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:04.716 02:14:04 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:44:04.716 02:14:04 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:44:04.716 02:14:04 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:44:04.716 02:14:04 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=150942 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:44:04.716 02:14:04 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 150942 00:44:04.716 02:14:04 -- common/autotest_common.sh@817 -- # '[' -z 150942 ']' 00:44:04.716 02:14:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:04.716 02:14:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:04.716 02:14:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:04.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:04.716 02:14:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:04.716 02:14:04 -- common/autotest_common.sh@10 -- # set +x 00:44:04.716 [2024-04-24 02:14:04.710995] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:44:04.716 [2024-04-24 02:14:04.711579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150942 ] 00:44:04.974 [2024-04-24 02:14:04.877004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:05.232 [2024-04-24 02:14:05.093058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:05.232 [2024-04-24 02:14:05.093058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.193 02:14:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:06.193 02:14:05 -- common/autotest_common.sh@850 -- # return 0 00:44:06.193 02:14:05 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:44:06.193 Nvme0n1 00:44:06.480 02:14:06 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:44:06.480 02:14:06 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:44:06.480 request: 00:44:06.480 { 00:44:06.480 "filename": "non_existing_file", 00:44:06.480 "bdev_name": "Nvme0n1", 00:44:06.480 "method": "bdev_nvme_apply_firmware", 00:44:06.480 "req_id": 1 00:44:06.480 } 00:44:06.480 Got JSON-RPC error response 00:44:06.480 response: 00:44:06.480 { 00:44:06.480 "code": -32603, 00:44:06.480 "message": "open file failed." 00:44:06.480 } 00:44:06.480 02:14:06 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:44:06.480 02:14:06 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:44:06.480 02:14:06 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:44:06.744 02:14:06 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:44:06.744 02:14:06 -- nvme/nvme_rpc.sh@40 -- # killprocess 150942 00:44:06.744 02:14:06 -- common/autotest_common.sh@936 -- # '[' -z 150942 ']' 00:44:06.744 02:14:06 -- common/autotest_common.sh@940 -- # kill -0 150942 00:44:06.744 02:14:06 -- common/autotest_common.sh@941 -- # uname 00:44:06.744 02:14:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:06.744 02:14:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150942 00:44:06.744 02:14:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:06.744 02:14:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:06.744 killing process with pid 150942 00:44:06.744 02:14:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150942' 00:44:06.744 02:14:06 -- common/autotest_common.sh@955 -- # kill 150942 00:44:06.744 02:14:06 -- common/autotest_common.sh@960 -- # wait 150942 00:44:09.278 00:44:09.278 real 0m4.724s 00:44:09.278 user 0m8.914s 00:44:09.278 sys 0m0.649s 00:44:09.278 02:14:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:09.278 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:44:09.278 ************************************ 00:44:09.278 END TEST nvme_rpc 00:44:09.278 ************************************ 00:44:09.278 02:14:09 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:44:09.278 02:14:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:44:09.278 02:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:09.278 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:44:09.278 ************************************ 00:44:09.278 
START TEST nvme_rpc_timeouts 00:44:09.278 ************************************ 00:44:09.278 02:14:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:44:09.537 * Looking for test storage... 00:44:09.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_151027 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_151027 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=151054 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:44:09.537 02:14:09 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 151054 00:44:09.537 02:14:09 -- common/autotest_common.sh@817 -- # '[' -z 151054 ']' 00:44:09.537 02:14:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:09.537 02:14:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:09.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:09.537 02:14:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:09.537 02:14:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:09.537 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:44:09.537 [2024-04-24 02:14:09.463815] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:44:09.537 [2024-04-24 02:14:09.464041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151054 ] 00:44:09.796 [2024-04-24 02:14:09.635170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:09.796 [2024-04-24 02:14:09.856607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:09.796 [2024-04-24 02:14:09.856613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:10.731 02:14:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:10.731 02:14:10 -- common/autotest_common.sh@850 -- # return 0 00:44:10.731 Checking default timeout settings: 00:44:10.731 02:14:10 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:44:10.731 02:14:10 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:11.297 Making settings changes with rpc: 00:44:11.297 02:14:11 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:44:11.298 02:14:11 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:44:11.298 Check default vs. modified settings: 00:44:11.298 02:14:11 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:44:11.298 02:14:11 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_151027 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_151027 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:44:11.864 Setting action_on_timeout is changed as expected. 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_151027 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:11.864 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_151027 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:44:11.865 Setting timeout_us is changed as expected. 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_151027 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_151027 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:44:11.865 Setting timeout_admin_us is changed as expected. 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
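The block above is the core of nvme_rpc_timeouts: the target configuration is dumped with save_config before and after bdev_nvme_set_options, and each of the three timeout knobs is pulled out of both dumps and expected to differ. Condensed, and reusing the paths and PID suffix visible in this run, the flow is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_151027
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_151027
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      # strip everything but [a-zA-Z0-9] so values like "none", and 12000000, compare cleanly
      before=$(grep "$setting" /tmp/settings_default_151027  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_151027 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done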
00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_151027 /tmp/settings_modified_151027 00:44:11.865 02:14:11 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 151054 00:44:11.865 02:14:11 -- common/autotest_common.sh@936 -- # '[' -z 151054 ']' 00:44:11.865 02:14:11 -- common/autotest_common.sh@940 -- # kill -0 151054 00:44:11.865 02:14:11 -- common/autotest_common.sh@941 -- # uname 00:44:11.865 02:14:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:11.865 02:14:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151054 00:44:11.865 02:14:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:11.865 killing process with pid 151054 00:44:11.865 02:14:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:11.865 02:14:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151054' 00:44:11.865 02:14:11 -- common/autotest_common.sh@955 -- # kill 151054 00:44:11.865 02:14:11 -- common/autotest_common.sh@960 -- # wait 151054 00:44:14.404 RPC TIMEOUT SETTING TEST PASSED. 00:44:14.404 02:14:14 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:44:14.404 00:44:14.404 real 0m5.112s 00:44:14.404 user 0m9.719s 00:44:14.404 sys 0m0.693s 00:44:14.404 02:14:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:14.404 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:44:14.404 ************************************ 00:44:14.404 END TEST nvme_rpc_timeouts 00:44:14.404 ************************************ 00:44:14.405 02:14:14 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:44:14.405 02:14:14 -- spdk/autotest.sh@245 -- # [[ 0 -eq 1 ]] 00:44:14.405 02:14:14 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:44:14.405 02:14:14 -- spdk/autotest.sh@258 -- # timing_exit lib 00:44:14.405 02:14:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:44:14.405 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:44:14.662 02:14:14 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:44:14.662 02:14:14 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:44:14.662 02:14:14 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:44:14.662 02:14:14 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:44:14.662 02:14:14 -- spdk/autotest.sh@373 -- # [[ 1 -eq 1 ]] 00:44:14.662 02:14:14 -- spdk/autotest.sh@374 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:44:14.662 02:14:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:44:14.662 02:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:44:14.662 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:44:14.662 ************************************ 00:44:14.662 START TEST blockdev_raid5f 00:44:14.662 ************************************ 00:44:14.662 02:14:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:44:14.662 * Looking for test storage... 00:44:14.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:44:14.662 02:14:14 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:44:14.662 02:14:14 -- bdev/nbd_common.sh@6 -- # set -e 00:44:14.662 02:14:14 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:44:14.662 02:14:14 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:14.663 02:14:14 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:44:14.663 02:14:14 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:44:14.663 02:14:14 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:44:14.663 02:14:14 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:44:14.663 02:14:14 -- bdev/blockdev.sh@20 -- # : 00:44:14.663 02:14:14 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:44:14.663 02:14:14 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:44:14.663 02:14:14 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:44:14.663 02:14:14 -- bdev/blockdev.sh@674 -- # uname -s 00:44:14.663 02:14:14 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:44:14.663 02:14:14 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:44:14.663 02:14:14 -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:44:14.663 02:14:14 -- bdev/blockdev.sh@683 -- # crypto_device= 00:44:14.663 02:14:14 -- bdev/blockdev.sh@684 -- # dek= 00:44:14.663 02:14:14 -- bdev/blockdev.sh@685 -- # env_ctx= 00:44:14.663 02:14:14 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:44:14.663 02:14:14 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:44:14.663 02:14:14 -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:44:14.663 02:14:14 -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:44:14.663 02:14:14 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:44:14.663 02:14:14 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=151231 00:44:14.663 02:14:14 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:14.663 02:14:14 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:44:14.663 02:14:14 -- bdev/blockdev.sh@49 -- # waitforlisten 151231 00:44:14.663 02:14:14 -- common/autotest_common.sh@817 -- # '[' -z 151231 ']' 00:44:14.663 02:14:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:14.663 02:14:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:14.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:14.663 02:14:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:14.663 02:14:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:14.663 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:44:14.663 [2024-04-24 02:14:14.740289] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:44:14.663 [2024-04-24 02:14:14.740475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151231 ] 00:44:14.921 [2024-04-24 02:14:14.901864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:15.179 [2024-04-24 02:14:15.116102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:16.114 02:14:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:16.114 02:14:15 -- common/autotest_common.sh@850 -- # return 0 00:44:16.114 02:14:15 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:44:16.114 02:14:15 -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:44:16.114 02:14:15 -- bdev/blockdev.sh@280 -- # rpc_cmd 00:44:16.114 02:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:16.114 02:14:15 -- common/autotest_common.sh@10 -- # set +x 00:44:16.114 Malloc0 00:44:16.114 Malloc1 00:44:16.114 Malloc2 00:44:16.114 02:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:16.114 02:14:16 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:44:16.114 02:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:16.114 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:44:16.114 02:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:16.114 02:14:16 -- bdev/blockdev.sh@740 -- # cat 00:44:16.114 02:14:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:44:16.114 02:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:16.114 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:44:16.114 02:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:16.114 02:14:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:44:16.114 02:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:16.114 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:44:16.114 02:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:16.114 02:14:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:44:16.114 02:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:16.114 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:44:16.114 02:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:16.114 02:14:16 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:44:16.114 02:14:16 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:44:16.114 02:14:16 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:44:16.114 02:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:16.114 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:44:16.114 02:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:16.373 02:14:16 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:44:16.373 02:14:16 -- bdev/blockdev.sh@749 -- # jq -r .name 00:44:16.373 02:14:16 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "272e987c-8af3-4ca7-8f03-de468aceedff"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "272e987c-8af3-4ca7-8f03-de468aceedff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "272e987c-8af3-4ca7-8f03-de468aceedff",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fab3aed2-5ba0-40ea-95a4-e6caf55c249d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "edce53a1-f39d-4014-8892-2fa2a91bb075",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1f136cc7-9efd-4f79-9dde-f544da919ab4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:44:16.373 02:14:16 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:44:16.373 02:14:16 -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:44:16.373 02:14:16 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:44:16.373 02:14:16 -- bdev/blockdev.sh@754 -- # killprocess 151231 00:44:16.373 02:14:16 -- common/autotest_common.sh@936 -- # '[' -z 151231 ']' 00:44:16.373 02:14:16 -- common/autotest_common.sh@940 -- # kill -0 151231 00:44:16.373 02:14:16 -- common/autotest_common.sh@941 -- # uname 00:44:16.373 02:14:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:16.373 02:14:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151231 00:44:16.373 02:14:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:16.373 02:14:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:16.373 killing process with pid 151231 00:44:16.373 02:14:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151231' 00:44:16.373 02:14:16 -- common/autotest_common.sh@955 -- # kill 151231 00:44:16.373 02:14:16 -- common/autotest_common.sh@960 -- # wait 151231 00:44:19.703 02:14:19 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:19.703 02:14:19 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:44:19.703 02:14:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:44:19.703 02:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:19.703 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:44:19.703 ************************************ 00:44:19.703 START TEST bdev_hello_world 00:44:19.703 ************************************ 00:44:19.703 02:14:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:44:19.703 [2024-04-24 02:14:19.207156] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 
00:44:19.703 [2024-04-24 02:14:19.207370] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151308 ] 00:44:19.703 [2024-04-24 02:14:19.388204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:19.703 [2024-04-24 02:14:19.595257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:20.270 [2024-04-24 02:14:20.138081] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:44:20.270 [2024-04-24 02:14:20.138173] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:44:20.270 [2024-04-24 02:14:20.138214] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:44:20.270 [2024-04-24 02:14:20.138794] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:44:20.270 [2024-04-24 02:14:20.138997] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:44:20.270 [2024-04-24 02:14:20.139036] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:44:20.270 [2024-04-24 02:14:20.139143] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:44:20.270 00:44:20.270 [2024-04-24 02:14:20.139180] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:44:21.700 00:44:21.700 real 0m2.630s 00:44:21.700 user 0m2.219s 00:44:21.700 sys 0m0.285s 00:44:21.700 02:14:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:21.700 02:14:21 -- common/autotest_common.sh@10 -- # set +x 00:44:21.700 ************************************ 00:44:21.700 END TEST bdev_hello_world 00:44:21.700 ************************************ 00:44:21.959 02:14:21 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:44:21.959 02:14:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:44:21.959 02:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:21.959 02:14:21 -- common/autotest_common.sh@10 -- # set +x 00:44:21.959 ************************************ 00:44:21.959 START TEST bdev_bounds 00:44:21.959 ************************************ 00:44:21.959 02:14:21 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:44:21.959 02:14:21 -- bdev/blockdev.sh@290 -- # bdevio_pid=151366 00:44:21.959 02:14:21 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:44:21.959 Process bdevio pid: 151366 00:44:21.959 02:14:21 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 151366' 00:44:21.959 02:14:21 -- bdev/blockdev.sh@293 -- # waitforlisten 151366 00:44:21.959 02:14:21 -- common/autotest_common.sh@817 -- # '[' -z 151366 ']' 00:44:21.959 02:14:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:21.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:21.959 02:14:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:21.959 02:14:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:21.959 02:14:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:21.959 02:14:21 -- common/autotest_common.sh@10 -- # set +x 00:44:21.959 02:14:21 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:44:21.959 [2024-04-24 02:14:21.933738] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:44:21.959 [2024-04-24 02:14:21.934164] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151366 ] 00:44:22.218 [2024-04-24 02:14:22.128148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:22.532 [2024-04-24 02:14:22.439795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:22.532 [2024-04-24 02:14:22.439908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:22.532 [2024-04-24 02:14:22.440292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:23.098 02:14:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:23.098 02:14:23 -- common/autotest_common.sh@850 -- # return 0 00:44:23.098 02:14:23 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:44:23.355 I/O targets: 00:44:23.355 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:44:23.355 00:44:23.355 00:44:23.355 CUnit - A unit testing framework for C - Version 2.1-3 00:44:23.355 http://cunit.sourceforge.net/ 00:44:23.355 00:44:23.355 00:44:23.355 Suite: bdevio tests on: raid5f 00:44:23.355 Test: blockdev write read block ...passed 00:44:23.355 Test: blockdev write zeroes read block ...passed 00:44:23.355 Test: blockdev write zeroes read no split ...passed 00:44:23.355 Test: blockdev write zeroes read split ...passed 00:44:23.613 Test: blockdev write zeroes read split partial ...passed 00:44:23.613 Test: blockdev reset ...passed 00:44:23.613 Test: blockdev write read 8 blocks ...passed 00:44:23.613 Test: blockdev write read size > 128k ...passed 00:44:23.613 Test: blockdev write read invalid size ...passed 00:44:23.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:23.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:23.613 Test: blockdev write read max offset ...passed 00:44:23.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:23.613 Test: blockdev writev readv 8 blocks ...passed 00:44:23.613 Test: blockdev writev readv 30 x 1block ...passed 00:44:23.613 Test: blockdev writev readv block ...passed 00:44:23.613 Test: blockdev writev readv size > 128k ...passed 00:44:23.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:23.613 Test: blockdev comparev and writev ...passed 00:44:23.613 Test: blockdev nvme passthru rw ...passed 00:44:23.613 Test: blockdev nvme passthru vendor specific ...passed 00:44:23.613 Test: blockdev nvme admin passthru ...passed 00:44:23.613 Test: blockdev copy ...passed 00:44:23.613 00:44:23.613 Run Summary: Type Total Ran Passed Failed Inactive 00:44:23.613 suites 1 1 n/a 0 0 00:44:23.613 tests 23 23 23 0 0 00:44:23.613 asserts 130 130 130 0 n/a 00:44:23.613 00:44:23.613 Elapsed time = 0.646 seconds 00:44:23.613 0 00:44:23.613 02:14:23 -- bdev/blockdev.sh@295 -- # killprocess 151366 00:44:23.613 02:14:23 -- common/autotest_common.sh@936 -- # '[' -z 151366 ']' 
00:44:23.613 02:14:23 -- common/autotest_common.sh@940 -- # kill -0 151366 00:44:23.613 02:14:23 -- common/autotest_common.sh@941 -- # uname 00:44:23.613 02:14:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:23.613 02:14:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151366 00:44:23.613 02:14:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:23.613 02:14:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:23.613 02:14:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151366' 00:44:23.613 killing process with pid 151366 00:44:23.613 02:14:23 -- common/autotest_common.sh@955 -- # kill 151366 00:44:23.613 02:14:23 -- common/autotest_common.sh@960 -- # wait 151366 00:44:25.514 02:14:25 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:44:25.514 00:44:25.514 real 0m3.486s 00:44:25.514 user 0m8.126s 00:44:25.514 sys 0m0.436s 00:44:25.514 ************************************ 00:44:25.514 END TEST bdev_bounds 00:44:25.514 ************************************ 00:44:25.514 02:14:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:25.514 02:14:25 -- common/autotest_common.sh@10 -- # set +x 00:44:25.514 02:14:25 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:44:25.514 02:14:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:44:25.514 02:14:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:25.514 02:14:25 -- common/autotest_common.sh@10 -- # set +x 00:44:25.514 ************************************ 00:44:25.514 START TEST bdev_nbd 00:44:25.514 ************************************ 00:44:25.514 02:14:25 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:44:25.514 02:14:25 -- bdev/blockdev.sh@300 -- # uname -s 00:44:25.514 02:14:25 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:44:25.514 02:14:25 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:25.514 02:14:25 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:25.514 02:14:25 -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:44:25.514 02:14:25 -- bdev/blockdev.sh@304 -- # local bdev_all 00:44:25.514 02:14:25 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:44:25.514 02:14:25 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:44:25.514 02:14:25 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:44:25.514 02:14:25 -- bdev/blockdev.sh@311 -- # local nbd_all 00:44:25.514 02:14:25 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:44:25.514 02:14:25 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:44:25.514 02:14:25 -- bdev/blockdev.sh@314 -- # local nbd_list 00:44:25.514 02:14:25 -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 00:44:25.514 02:14:25 -- bdev/blockdev.sh@315 -- # local bdev_list 00:44:25.514 02:14:25 -- bdev/blockdev.sh@318 -- # nbd_pid=151446 00:44:25.514 02:14:25 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:44:25.514 02:14:25 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:44:25.514 02:14:25 -- bdev/blockdev.sh@320 -- 
# waitforlisten 151446 /var/tmp/spdk-nbd.sock 00:44:25.514 02:14:25 -- common/autotest_common.sh@817 -- # '[' -z 151446 ']' 00:44:25.514 02:14:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:44:25.514 02:14:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:25.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:44:25.514 02:14:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:44:25.514 02:14:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:25.514 02:14:25 -- common/autotest_common.sh@10 -- # set +x 00:44:25.514 [2024-04-24 02:14:25.515230] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:44:25.515 [2024-04-24 02:14:25.515381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:25.773 [2024-04-24 02:14:25.688625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:26.031 [2024-04-24 02:14:25.963288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.598 02:14:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:26.598 02:14:26 -- common/autotest_common.sh@850 -- # return 0 00:44:26.598 02:14:26 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@24 -- # local i 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:44:26.598 02:14:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:44:26.856 02:14:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:44:26.856 02:14:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:44:26.856 02:14:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:44:26.856 02:14:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:44:26.856 02:14:26 -- common/autotest_common.sh@855 -- # local i 00:44:26.856 02:14:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:44:26.856 02:14:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:44:26.856 02:14:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:44:26.856 02:14:26 -- common/autotest_common.sh@859 -- # break 00:44:26.856 02:14:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:44:26.856 02:14:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:44:26.856 02:14:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:26.856 1+0 records in 00:44:26.856 1+0 
records out 00:44:26.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245351 s, 16.7 MB/s 00:44:26.856 02:14:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:26.856 02:14:26 -- common/autotest_common.sh@872 -- # size=4096 00:44:26.856 02:14:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:26.856 02:14:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:44:26.856 02:14:26 -- common/autotest_common.sh@875 -- # return 0 00:44:26.857 02:14:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:44:26.857 02:14:26 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:44:26.857 02:14:26 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:44:27.116 { 00:44:27.116 "nbd_device": "/dev/nbd0", 00:44:27.116 "bdev_name": "raid5f" 00:44:27.116 } 00:44:27.116 ]' 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@119 -- # echo '[ 00:44:27.116 { 00:44:27.116 "nbd_device": "/dev/nbd0", 00:44:27.116 "bdev_name": "raid5f" 00:44:27.116 } 00:44:27.116 ]' 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@51 -- # local i 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:27.116 02:14:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@41 -- # break 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@45 -- # return 0 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:27.374 02:14:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@65 -- # true 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@65 -- # count=0 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@122 -- # count=0 00:44:27.941 02:14:27 -- 
bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@127 -- # return 0 00:44:27.941 02:14:27 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@12 -- # local i 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:27.941 02:14:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:44:28.200 /dev/nbd0 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:28.200 02:14:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:44:28.200 02:14:28 -- common/autotest_common.sh@855 -- # local i 00:44:28.200 02:14:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:44:28.200 02:14:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:44:28.200 02:14:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:44:28.200 02:14:28 -- common/autotest_common.sh@859 -- # break 00:44:28.200 02:14:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:44:28.200 02:14:28 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:44:28.200 02:14:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:28.200 1+0 records in 00:44:28.200 1+0 records out 00:44:28.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302635 s, 13.5 MB/s 00:44:28.200 02:14:28 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:28.200 02:14:28 -- common/autotest_common.sh@872 -- # size=4096 00:44:28.200 02:14:28 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:28.200 02:14:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:44:28.200 02:14:28 -- common/autotest_common.sh@875 -- # return 0 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:28.200 02:14:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:44:28.459 { 00:44:28.459 "nbd_device": "/dev/nbd0", 00:44:28.459 "bdev_name": "raid5f" 00:44:28.459 } 00:44:28.459 ]' 
00:44:28.459 02:14:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:44:28.459 { 00:44:28.459 "nbd_device": "/dev/nbd0", 00:44:28.459 "bdev_name": "raid5f" 00:44:28.459 } 00:44:28.459 ]' 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@65 -- # count=1 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@66 -- # echo 1 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@95 -- # count=1 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:44:28.459 256+0 records in 00:44:28.459 256+0 records out 00:44:28.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112137 s, 93.5 MB/s 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:44:28.459 256+0 records in 00:44:28.459 256+0 records out 00:44:28.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330367 s, 31.7 MB/s 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@51 -- # local i 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:28.459 02:14:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:29.026 02:14:28 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@41 -- # break 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@45 -- # return 0 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:29.026 02:14:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:29.026 02:14:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:29.026 02:14:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:29.026 02:14:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@65 -- # true 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@65 -- # count=0 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@104 -- # count=0 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@109 -- # return 0 00:44:29.284 02:14:29 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:44:29.284 02:14:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:44:29.542 malloc_lvol_verify 00:44:29.542 02:14:29 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:44:29.800 c6d82728-ea5c-422a-9fe1-0566f8be0487 00:44:29.800 02:14:29 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:44:30.059 2d2bf649-7cf1-454e-a388-69ee0a833114 00:44:30.059 02:14:30 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:44:30.317 /dev/nbd0 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:44:30.317 mke2fs 1.46.5 (30-Dec-2021) 00:44:30.317 00:44:30.317 Filesystem too small for a journal 00:44:30.317 Discarding device blocks: 0/1024 done 00:44:30.317 Creating filesystem with 1024 4k blocks and 1024 inodes 00:44:30.317 00:44:30.317 Allocating group tables: 0/1 done 00:44:30.317 Writing inode tables: 0/1 done 00:44:30.317 Writing superblocks and filesystem accounting information: 0/1 done 00:44:30.317 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:30.317 02:14:30 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@51 -- # local i 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:30.317 02:14:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@41 -- # break 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@45 -- # return 0 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:44:30.576 02:14:30 -- bdev/nbd_common.sh@147 -- # return 0 00:44:30.576 02:14:30 -- bdev/blockdev.sh@326 -- # killprocess 151446 00:44:30.576 02:14:30 -- common/autotest_common.sh@936 -- # '[' -z 151446 ']' 00:44:30.576 02:14:30 -- common/autotest_common.sh@940 -- # kill -0 151446 00:44:30.576 02:14:30 -- common/autotest_common.sh@941 -- # uname 00:44:30.576 02:14:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:30.576 02:14:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151446 00:44:30.576 02:14:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:30.576 02:14:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:30.576 02:14:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151446' 00:44:30.576 killing process with pid 151446 00:44:30.576 02:14:30 -- common/autotest_common.sh@955 -- # kill 151446 00:44:30.576 02:14:30 -- common/autotest_common.sh@960 -- # wait 151446 00:44:32.477 02:14:32 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:44:32.477 00:44:32.477 real 0m6.948s 00:44:32.477 user 0m9.588s 00:44:32.477 sys 0m1.574s 00:44:32.477 02:14:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:32.477 02:14:32 -- common/autotest_common.sh@10 -- # set +x 00:44:32.477 ************************************ 00:44:32.477 END TEST bdev_nbd 00:44:32.477 ************************************ 00:44:32.477 02:14:32 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:44:32.477 02:14:32 -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:44:32.477 02:14:32 -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:44:32.477 02:14:32 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:32.477 02:14:32 -- common/autotest_common.sh@10 -- # set +x 00:44:32.477 ************************************ 00:44:32.477 START TEST bdev_fio 00:44:32.477 ************************************ 00:44:32.477 02:14:32 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:44:32.477 02:14:32 -- bdev/blockdev.sh@331 -- # local env_context 00:44:32.477 02:14:32 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:44:32.477 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:44:32.477 02:14:32 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:44:32.477 02:14:32 -- bdev/blockdev.sh@339 -- # echo '' 00:44:32.477 02:14:32 -- 
bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:44:32.477 02:14:32 -- bdev/blockdev.sh@339 -- # env_context= 00:44:32.477 02:14:32 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:32.477 02:14:32 -- common/autotest_common.sh@1267 -- # local workload=verify 00:44:32.477 02:14:32 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:44:32.477 02:14:32 -- common/autotest_common.sh@1269 -- # local env_context= 00:44:32.477 02:14:32 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:44:32.477 02:14:32 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:32.477 02:14:32 -- common/autotest_common.sh@1287 -- # cat 00:44:32.477 02:14:32 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:44:32.477 02:14:32 -- common/autotest_common.sh@1300 -- # cat 00:44:32.477 02:14:32 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:44:32.478 02:14:32 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:44:32.735 02:14:32 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:44:32.735 02:14:32 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:44:32.735 02:14:32 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:44:32.735 02:14:32 -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:44:32.735 02:14:32 -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:44:32.735 02:14:32 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:44:32.735 02:14:32 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:32.735 02:14:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:44:32.735 02:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:32.735 02:14:32 -- common/autotest_common.sh@10 -- # set +x 00:44:32.735 ************************************ 00:44:32.735 START TEST bdev_fio_rw_verify 00:44:32.735 ************************************ 00:44:32.735 02:14:32 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:32.735 02:14:32 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:32.736 
02:14:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:44:32.736 02:14:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:32.736 02:14:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:44:32.736 02:14:32 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:32.736 02:14:32 -- common/autotest_common.sh@1327 -- # shift 00:44:32.736 02:14:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:44:32.736 02:14:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:44:32.736 02:14:32 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:32.736 02:14:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:44:32.736 02:14:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:44:32.736 02:14:32 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:44:32.736 02:14:32 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:44:32.736 02:14:32 -- common/autotest_common.sh@1333 -- # break 00:44:32.736 02:14:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:32.736 02:14:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:32.993 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:44:32.993 fio-3.35 00:44:32.993 Starting 1 thread 00:44:45.258 00:44:45.258 job_raid5f: (groupid=0, jobs=1): err= 0: pid=151705: Wed Apr 24 02:14:43 2024 00:44:45.258 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(412MiB/10001msec) 00:44:45.258 slat (usec): min=18, max=106, avg=22.94, stdev= 3.51 00:44:45.258 clat (usec): min=10, max=665, avg=151.93, stdev=56.65 00:44:45.258 lat (usec): min=30, max=718, avg=174.88, stdev=57.46 00:44:45.258 clat percentiles (usec): 00:44:45.258 | 50.000th=[ 153], 99.000th=[ 277], 99.900th=[ 314], 99.990th=[ 379], 00:44:45.258 | 99.999th=[ 619] 00:44:45.258 write: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(428MiB/9873msec); 0 zone resets 00:44:45.258 slat (usec): min=7, max=165, avg=19.20, stdev= 3.97 00:44:45.258 clat (usec): min=69, max=1021, avg=342.57, stdev=53.68 00:44:45.258 lat (usec): min=89, max=1055, avg=361.77, stdev=55.24 00:44:45.258 clat percentiles (usec): 00:44:45.258 | 50.000th=[ 343], 99.000th=[ 474], 99.900th=[ 537], 99.990th=[ 693], 00:44:45.258 | 99.999th=[ 988] 00:44:45.258 bw ( KiB/s): min=41664, max=48960, per=98.65%, avg=43765.05, stdev=2126.82, samples=19 00:44:45.258 iops : min=10416, max=12240, avg=10941.26, stdev=531.70, samples=19 00:44:45.258 lat (usec) : 20=0.01%, 50=0.01%, 100=11.70%, 250=37.46%, 500=50.60% 00:44:45.258 lat (usec) : 750=0.24%, 1000=0.01% 00:44:45.258 lat (msec) : 2=0.01% 00:44:45.258 cpu : usr=99.47%, sys=0.48%, ctx=123, majf=0, minf=7517 00:44:45.258 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:45.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:45.258 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:45.258 issued rwts: total=105484,109498,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:44:45.258 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:45.258 00:44:45.258 Run status group 0 (all jobs): 00:44:45.258 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=412MiB (432MB), run=10001-10001msec 00:44:45.258 WRITE: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=428MiB (449MB), run=9873-9873msec 00:44:45.517 ----------------------------------------------------- 00:44:45.517 Suppressions used: 00:44:45.517 count bytes template 00:44:45.517 1 7 /usr/src/fio/parse.c 00:44:45.517 728 69888 /usr/src/fio/iolog.c 00:44:45.517 1 904 libcrypto.so 00:44:45.517 ----------------------------------------------------- 00:44:45.517 00:44:45.775 ************************************ 00:44:45.775 END TEST bdev_fio_rw_verify 00:44:45.775 ************************************ 00:44:45.775 00:44:45.775 real 0m12.984s 00:44:45.775 user 0m13.856s 00:44:45.775 sys 0m0.773s 00:44:45.775 02:14:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:45.775 02:14:45 -- common/autotest_common.sh@10 -- # set +x 00:44:45.775 02:14:45 -- bdev/blockdev.sh@350 -- # rm -f 00:44:45.775 02:14:45 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:45.775 02:14:45 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:44:45.775 02:14:45 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:45.775 02:14:45 -- common/autotest_common.sh@1267 -- # local workload=trim 00:44:45.775 02:14:45 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:44:45.775 02:14:45 -- common/autotest_common.sh@1269 -- # local env_context= 00:44:45.775 02:14:45 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:44:45.775 02:14:45 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:45.775 02:14:45 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:44:45.775 02:14:45 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:44:45.775 02:14:45 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:45.775 02:14:45 -- common/autotest_common.sh@1287 -- # cat 00:44:45.775 02:14:45 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:44:45.775 02:14:45 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:44:45.775 02:14:45 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:44:45.775 02:14:45 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:44:45.775 02:14:45 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "272e987c-8af3-4ca7-8f03-de468aceedff"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "272e987c-8af3-4ca7-8f03-de468aceedff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "272e987c-8af3-4ca7-8f03-de468aceedff",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' 
' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fab3aed2-5ba0-40ea-95a4-e6caf55c249d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "edce53a1-f39d-4014-8892-2fa2a91bb075",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1f136cc7-9efd-4f79-9dde-f544da919ab4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:44:45.775 02:14:45 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:44:45.775 02:14:45 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:45.775 02:14:45 -- bdev/blockdev.sh@362 -- # popd 00:44:45.775 /home/vagrant/spdk_repo/spdk 00:44:45.775 02:14:45 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:44:45.775 02:14:45 -- bdev/blockdev.sh@364 -- # return 0 00:44:45.775 00:44:45.775 real 0m13.256s 00:44:45.775 user 0m14.006s 00:44:45.775 sys 0m0.887s 00:44:45.775 02:14:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:45.775 ************************************ 00:44:45.775 END TEST bdev_fio 00:44:45.775 ************************************ 00:44:45.775 02:14:45 -- common/autotest_common.sh@10 -- # set +x 00:44:45.775 02:14:45 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:45.775 02:14:45 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:45.776 02:14:45 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:44:45.776 02:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:45.776 02:14:45 -- common/autotest_common.sh@10 -- # set +x 00:44:46.034 ************************************ 00:44:46.034 START TEST bdev_verify 00:44:46.034 ************************************ 00:44:46.034 02:14:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:46.034 [2024-04-24 02:14:45.956526] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:44:46.034 [2024-04-24 02:14:45.956817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151886 ] 00:44:46.293 [2024-04-24 02:14:46.144912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:46.552 [2024-04-24 02:14:46.480302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:46.552 [2024-04-24 02:14:46.480312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.125 Running I/O for 5 seconds... 
00:44:52.412 00:44:52.412 Latency(us) 00:44:52.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:52.412 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:52.412 Verification LBA range: start 0x0 length 0x2000 00:44:52.412 raid5f : 5.01 6587.09 25.73 0.00 0.00 29113.22 243.81 26339.23 00:44:52.412 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:52.412 Verification LBA range: start 0x2000 length 0x2000 00:44:52.412 raid5f : 5.01 6673.70 26.07 0.00 0.00 28660.39 200.90 26214.40 00:44:52.412 =================================================================================================================== 00:44:52.412 Total : 13260.79 51.80 0.00 0.00 28885.27 200.90 26339.23 00:44:53.786 00:44:53.786 real 0m7.925s 00:44:53.786 user 0m14.364s 00:44:53.786 sys 0m0.284s 00:44:53.786 02:14:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:53.786 02:14:53 -- common/autotest_common.sh@10 -- # set +x 00:44:53.786 ************************************ 00:44:53.786 END TEST bdev_verify 00:44:53.786 ************************************ 00:44:53.786 02:14:53 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:53.786 02:14:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:44:53.786 02:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:53.786 02:14:53 -- common/autotest_common.sh@10 -- # set +x 00:44:54.044 ************************************ 00:44:54.044 START TEST bdev_verify_big_io 00:44:54.044 ************************************ 00:44:54.044 02:14:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:54.044 [2024-04-24 02:14:53.994390] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:44:54.044 [2024-04-24 02:14:53.994632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152003 ] 00:44:54.302 [2024-04-24 02:14:54.185626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:54.560 [2024-04-24 02:14:54.469535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:54.560 [2024-04-24 02:14:54.469538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:55.179 Running I/O for 5 seconds... 
00:45:00.468 00:45:00.468 Latency(us) 00:45:00.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:00.468 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:00.468 Verification LBA range: start 0x0 length 0x200 00:45:00.468 raid5f : 5.12 396.37 24.77 0.00 0.00 7918089.14 192.12 349525.33 00:45:00.468 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:00.468 Verification LBA range: start 0x200 length 0x200 00:45:00.468 raid5f : 5.19 402.90 25.18 0.00 0.00 7659670.97 186.27 345530.76 00:45:00.468 =================================================================================================================== 00:45:00.468 Total : 799.27 49.95 0.00 0.00 7786937.54 186.27 349525.33 00:45:02.370 00:45:02.370 real 0m8.153s 00:45:02.370 user 0m14.812s 00:45:02.370 sys 0m0.317s 00:45:02.370 ************************************ 00:45:02.370 END TEST bdev_verify_big_io 00:45:02.370 ************************************ 00:45:02.370 02:15:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:02.370 02:15:02 -- common/autotest_common.sh@10 -- # set +x 00:45:02.370 02:15:02 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:02.370 02:15:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:45:02.370 02:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:02.370 02:15:02 -- common/autotest_common.sh@10 -- # set +x 00:45:02.370 ************************************ 00:45:02.370 START TEST bdev_write_zeroes 00:45:02.370 ************************************ 00:45:02.370 02:15:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:02.370 [2024-04-24 02:15:02.216720] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:45:02.370 [2024-04-24 02:15:02.216877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152122 ] 00:45:02.370 [2024-04-24 02:15:02.379044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:02.630 [2024-04-24 02:15:02.613312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:03.195 Running I/O for 1 seconds... 
00:45:04.570 00:45:04.570 Latency(us) 00:45:04.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:04.570 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:45:04.570 raid5f : 1.00 22785.27 89.00 0.00 0.00 5599.07 1771.03 7146.54 00:45:04.570 =================================================================================================================== 00:45:04.570 Total : 22785.27 89.00 0.00 0.00 5599.07 1771.03 7146.54 00:45:05.986 00:45:05.986 real 0m3.792s 00:45:05.986 user 0m3.426s 00:45:05.986 sys 0m0.252s 00:45:05.986 02:15:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:05.986 ************************************ 00:45:05.986 END TEST bdev_write_zeroes 00:45:05.986 ************************************ 00:45:05.986 02:15:05 -- common/autotest_common.sh@10 -- # set +x 00:45:05.986 02:15:05 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:05.986 02:15:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:45:05.986 02:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:05.986 02:15:05 -- common/autotest_common.sh@10 -- # set +x 00:45:05.986 ************************************ 00:45:05.986 START TEST bdev_json_nonenclosed 00:45:05.986 ************************************ 00:45:05.986 02:15:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:06.354 [2024-04-24 02:15:06.106232] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:45:06.354 [2024-04-24 02:15:06.106400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152198 ] 00:45:06.354 [2024-04-24 02:15:06.268942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:06.611 [2024-04-24 02:15:06.489388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:06.611 [2024-04-24 02:15:06.489505] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:45:06.611 [2024-04-24 02:15:06.489540] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:45:06.611 [2024-04-24 02:15:06.489565] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:07.177 00:45:07.178 real 0m0.925s 00:45:07.178 user 0m0.684s 00:45:07.178 sys 0m0.140s 00:45:07.178 02:15:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:07.178 ************************************ 00:45:07.178 END TEST bdev_json_nonenclosed 00:45:07.178 ************************************ 00:45:07.178 02:15:06 -- common/autotest_common.sh@10 -- # set +x 00:45:07.178 02:15:07 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:07.178 02:15:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:45:07.178 02:15:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:07.178 02:15:07 -- common/autotest_common.sh@10 -- # set +x 00:45:07.178 ************************************ 00:45:07.178 START TEST bdev_json_nonarray 00:45:07.178 ************************************ 00:45:07.178 02:15:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:07.178 [2024-04-24 02:15:07.143362] Starting SPDK v24.05-pre git sha1 3f3de12cc / DPDK 23.11.0 initialization... 00:45:07.178 [2024-04-24 02:15:07.143562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152241 ] 00:45:07.436 [2024-04-24 02:15:07.324815] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:07.694 [2024-04-24 02:15:07.537708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.694 [2024-04-24 02:15:07.537817] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:45:07.694 [2024-04-24 02:15:07.537853] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:45:07.694 [2024-04-24 02:15:07.537884] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:07.952 00:45:07.952 real 0m0.960s 00:45:07.952 user 0m0.672s 00:45:07.952 sys 0m0.187s 00:45:07.952 ************************************ 00:45:07.952 END TEST bdev_json_nonarray 00:45:07.952 ************************************ 00:45:07.952 02:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:07.952 02:15:08 -- common/autotest_common.sh@10 -- # set +x 00:45:08.210 02:15:08 -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:45:08.210 02:15:08 -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:45:08.210 02:15:08 -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:45:08.210 02:15:08 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:45:08.210 02:15:08 -- bdev/blockdev.sh@811 -- # cleanup 00:45:08.210 02:15:08 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:45:08.210 02:15:08 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:45:08.210 02:15:08 -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:45:08.210 02:15:08 -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:45:08.210 02:15:08 -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:45:08.210 02:15:08 -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:45:08.210 00:45:08.210 real 0m53.526s 00:45:08.210 user 1m12.827s 00:45:08.210 sys 0m5.429s 00:45:08.210 02:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:08.210 02:15:08 -- common/autotest_common.sh@10 -- # set +x 00:45:08.210 ************************************ 00:45:08.210 END TEST blockdev_raid5f 00:45:08.210 ************************************ 00:45:08.210 02:15:08 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:45:08.210 02:15:08 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:45:08.210 02:15:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:45:08.210 02:15:08 -- common/autotest_common.sh@10 -- # set +x 00:45:08.210 02:15:08 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:45:08.210 02:15:08 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:45:08.210 02:15:08 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:45:08.210 02:15:08 -- common/autotest_common.sh@10 -- # set +x 00:45:10.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:10.112 Waiting for block devices as requested 00:45:10.112 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:10.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:10.679 Cleaning 00:45:10.679 Removing: /var/run/dpdk/spdk0/config 00:45:10.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:10.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:10.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:10.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:10.679 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:10.679 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:10.679 Removing: /dev/shm/spdk_tgt_trace.pid110546 00:45:10.679 Removing: /var/run/dpdk/spdk0 00:45:10.938 Removing: /var/run/dpdk/spdk_pid110263 00:45:10.938 Removing: /var/run/dpdk/spdk_pid110546 00:45:10.938 Removing: /var/run/dpdk/spdk_pid110828 00:45:10.938 Removing: /var/run/dpdk/spdk_pid110958 00:45:10.938 Removing: 
/var/run/dpdk/spdk_pid111022 00:45:10.938 Removing: /var/run/dpdk/spdk_pid111187 00:45:10.938 Removing: /var/run/dpdk/spdk_pid111209 00:45:10.938 Removing: /var/run/dpdk/spdk_pid111389 00:45:10.938 Removing: /var/run/dpdk/spdk_pid111668 00:45:10.938 Removing: /var/run/dpdk/spdk_pid111862 00:45:10.938 Removing: /var/run/dpdk/spdk_pid111979 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112107 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112241 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112371 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112428 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112482 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112567 00:45:10.938 Removing: /var/run/dpdk/spdk_pid112698 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113243 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113334 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113422 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113450 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113623 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113649 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113825 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113853 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113938 00:45:10.938 Removing: /var/run/dpdk/spdk_pid113961 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114051 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114074 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114293 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114347 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114399 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114491 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114595 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114650 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114758 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114824 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114893 00:45:10.938 Removing: /var/run/dpdk/spdk_pid114955 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115022 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115089 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115159 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115220 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115287 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115356 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115421 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115495 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115565 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115626 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115687 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115754 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115817 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115887 00:45:10.938 Removing: /var/run/dpdk/spdk_pid115957 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116021 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116088 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116199 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116342 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116540 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116655 00:45:10.938 Removing: /var/run/dpdk/spdk_pid116733 00:45:10.938 Removing: /var/run/dpdk/spdk_pid118019 00:45:10.938 Removing: /var/run/dpdk/spdk_pid118250 00:45:11.197 Removing: /var/run/dpdk/spdk_pid118479 00:45:11.197 Removing: /var/run/dpdk/spdk_pid118614 00:45:11.197 Removing: /var/run/dpdk/spdk_pid118776 00:45:11.197 Removing: /var/run/dpdk/spdk_pid118866 00:45:11.197 Removing: /var/run/dpdk/spdk_pid118909 00:45:11.197 Removing: /var/run/dpdk/spdk_pid118945 00:45:11.197 Removing: /var/run/dpdk/spdk_pid119452 00:45:11.197 Removing: /var/run/dpdk/spdk_pid119552 00:45:11.197 Removing: 
/var/run/dpdk/spdk_pid119678 00:45:11.197 Removing: /var/run/dpdk/spdk_pid119753 00:45:11.197 Removing: /var/run/dpdk/spdk_pid121028 00:45:11.197 Removing: /var/run/dpdk/spdk_pid121980 00:45:11.197 Removing: /var/run/dpdk/spdk_pid122918 00:45:11.197 Removing: /var/run/dpdk/spdk_pid124083 00:45:11.197 Removing: /var/run/dpdk/spdk_pid125197 00:45:11.197 Removing: /var/run/dpdk/spdk_pid126310 00:45:11.197 Removing: /var/run/dpdk/spdk_pid127864 00:45:11.197 Removing: /var/run/dpdk/spdk_pid129147 00:45:11.197 Removing: /var/run/dpdk/spdk_pid130420 00:45:11.197 Removing: /var/run/dpdk/spdk_pid131111 00:45:11.197 Removing: /var/run/dpdk/spdk_pid131682 00:45:11.197 Removing: /var/run/dpdk/spdk_pid132346 00:45:11.197 Removing: /var/run/dpdk/spdk_pid132834 00:45:11.197 Removing: /var/run/dpdk/spdk_pid133429 00:45:11.197 Removing: /var/run/dpdk/spdk_pid134011 00:45:11.197 Removing: /var/run/dpdk/spdk_pid134718 00:45:11.197 Removing: /var/run/dpdk/spdk_pid135259 00:45:11.197 Removing: /var/run/dpdk/spdk_pid136726 00:45:11.197 Removing: /var/run/dpdk/spdk_pid137369 00:45:11.197 Removing: /var/run/dpdk/spdk_pid137932 00:45:11.197 Removing: /var/run/dpdk/spdk_pid139543 00:45:11.197 Removing: /var/run/dpdk/spdk_pid140241 00:45:11.197 Removing: /var/run/dpdk/spdk_pid140893 00:45:11.197 Removing: /var/run/dpdk/spdk_pid141691 00:45:11.197 Removing: /var/run/dpdk/spdk_pid141760 00:45:11.197 Removing: /var/run/dpdk/spdk_pid141816 00:45:11.197 Removing: /var/run/dpdk/spdk_pid141887 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142038 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142199 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142431 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142741 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142769 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142840 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142874 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142910 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142954 00:45:11.197 Removing: /var/run/dpdk/spdk_pid142985 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143017 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143068 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143100 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143132 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143175 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143213 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143246 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143279 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143319 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143348 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143387 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143419 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143454 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143517 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143552 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143599 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143704 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143755 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143795 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143849 00:45:11.197 Removing: /var/run/dpdk/spdk_pid143882 00:45:11.456 Removing: /var/run/dpdk/spdk_pid143919 00:45:11.456 Removing: /var/run/dpdk/spdk_pid143991 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144018 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144079 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144112 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144141 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144177 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144205 00:45:11.456 Removing: 
/var/run/dpdk/spdk_pid144238 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144268 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144297 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144358 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144417 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144457 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144513 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144551 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144576 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144649 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144687 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144748 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144781 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144810 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144846 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144875 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144910 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144939 00:45:11.456 Removing: /var/run/dpdk/spdk_pid144975 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145097 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145217 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145403 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145449 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145512 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145586 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145635 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145676 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145710 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145772 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145813 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145918 00:45:11.456 Removing: /var/run/dpdk/spdk_pid145994 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146058 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146387 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146539 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146598 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146706 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146807 00:45:11.456 Removing: /var/run/dpdk/spdk_pid146857 00:45:11.456 Removing: /var/run/dpdk/spdk_pid147140 00:45:11.456 Removing: /var/run/dpdk/spdk_pid147259 00:45:11.456 Removing: /var/run/dpdk/spdk_pid147379 00:45:11.456 Removing: /var/run/dpdk/spdk_pid147445 00:45:11.456 Removing: /var/run/dpdk/spdk_pid147497 00:45:11.456 Removing: /var/run/dpdk/spdk_pid147587 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148048 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148102 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148440 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148553 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148670 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148738 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148781 00:45:11.456 Removing: /var/run/dpdk/spdk_pid148817 00:45:11.456 Removing: /var/run/dpdk/spdk_pid150247 00:45:11.456 Removing: /var/run/dpdk/spdk_pid150403 00:45:11.456 Removing: /var/run/dpdk/spdk_pid150408 00:45:11.456 Removing: /var/run/dpdk/spdk_pid150430 00:45:11.456 Removing: /var/run/dpdk/spdk_pid150942 00:45:11.456 Removing: /var/run/dpdk/spdk_pid151054 00:45:11.456 Removing: /var/run/dpdk/spdk_pid151231 00:45:11.456 Removing: /var/run/dpdk/spdk_pid151308 00:45:11.714 Removing: /var/run/dpdk/spdk_pid151366 00:45:11.714 Removing: /var/run/dpdk/spdk_pid151685 00:45:11.714 Removing: /var/run/dpdk/spdk_pid151886 00:45:11.714 Removing: /var/run/dpdk/spdk_pid152003 00:45:11.714 Removing: /var/run/dpdk/spdk_pid152122 00:45:11.714 Removing: /var/run/dpdk/spdk_pid152198 00:45:11.714 Removing: /var/run/dpdk/spdk_pid152241 00:45:11.714 Clean 00:45:11.714 02:15:11 
-- common/autotest_common.sh@1437 -- # return 0
00:45:11.714 02:15:11 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:45:11.714 02:15:11 -- common/autotest_common.sh@716 -- # xtrace_disable
00:45:11.714 02:15:11 -- common/autotest_common.sh@10 -- # set +x
00:45:11.714 02:15:11 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:45:11.714 02:15:11 -- common/autotest_common.sh@716 -- # xtrace_disable
00:45:11.714 02:15:11 -- common/autotest_common.sh@10 -- # set +x
00:45:11.714 02:15:11 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:45:11.714 02:15:11 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:45:11.714 02:15:11 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:45:11.714 02:15:11 -- spdk/autotest.sh@389 -- # hash lcov
00:45:11.714 02:15:11 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:45:11.714 02:15:11 -- spdk/autotest.sh@391 -- # hostname
00:45:11.972 02:15:11 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:45:11.972 geninfo: WARNING: invalid characters removed from testname!
00:45:58.727 02:15:53 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:45:58.727 02:15:58 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:46:02.098 02:16:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:46:05.383 02:16:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:46:07.930 02:16:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:46:11.215 02:16:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
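The lcov calls above capture the post-test counters from the build tree, merge them with the pre-test baseline in cov_base.info, and then strip bundled DPDK sources, system headers under /usr, and the example/tool directories out of the combined report. A minimal standalone sketch of that flow follows, assuming the same output directory layout as this run; the final genhtml step is illustrative only and does not appear in this log.

#!/usr/bin/env bash
# Sketch of the coverage post-processing recorded above (paths are assumptions from this run).
set -euo pipefail

repo=/home/vagrant/spdk_repo/spdk          # SPDK source tree
out=$repo/../output                        # CI output directory used in this run
rc_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

# Capture counters produced by the test run.
lcov "${rc_opts[@]}" -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov "${rc_opts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Drop paths that are not interesting for SPDK coverage, mirroring the filters above.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${rc_opts[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done

# Optional HTML report (not part of this run's log).
genhtml "$out/cov_total.info" -o "$out/coverage"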
00:46:13.748 02:16:13 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:46:13.748 02:16:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:46:13.748 02:16:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:46:13.748 02:16:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:46:13.748 02:16:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:46:13.748 02:16:13 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:46:13.748 02:16:13 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:46:13.748 02:16:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:46:13.748 02:16:13 -- paths/export.sh@5 -- $ export PATH
00:46:13.748 02:16:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:46:13.748 02:16:13 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:46:13.748 02:16:13 -- common/autobuild_common.sh@435 -- $ date +%s
00:46:13.748 02:16:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713924973.XXXXXX
00:46:13.748 02:16:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713924973.uxQfWq
00:46:13.748 02:16:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:46:13.748 02:16:13 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:46:13.748 02:16:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:46:13.748 02:16:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:46:13.748 02:16:13 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:46:13.748 02:16:13 -- common/autobuild_common.sh@451 -- $ get_config_params
00:46:13.748 02:16:13 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:46:13.748 02:16:13 -- common/autotest_common.sh@10 -- $ set +x
00:46:13.748 02:16:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
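The config_params string assembled by get_config_params above records the feature set this build was configured with. Those flags map onto SPDK's ./configure options, so the configuration can in principle be reproduced on a local checkout roughly as follows; the checkout path and the fio source location are taken from this run's environment and are assumptions elsewhere.

#!/usr/bin/env bash
# Rebuild an SPDK checkout with the option set recorded in config_params above.
set -euo pipefail

cd /home/vagrant/spdk_repo/spdk        # assumed checkout location from this run

./configure \
    --enable-debug --enable-werror \
    --enable-ubsan --enable-asan --enable-coverage \
    --with-rdma --with-idxd --with-raid5f \
    --with-iscsi-initiator \
    --with-fio=/usr/src/fio            # fio sources must be present at this path

make -j"$(nproc)"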
00:46:13.748 02:16:13 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:46:13.748 02:16:13 -- pm/common@17 -- $ local monitor
00:46:13.748 02:16:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:13.748 02:16:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=153762
00:46:13.748 02:16:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:13.748 02:16:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=153764
00:46:13.748 02:16:13 -- pm/common@26 -- $ sleep 1
00:46:13.748 02:16:13 -- pm/common@21 -- $ date +%s
00:46:13.748 02:16:13 -- pm/common@21 -- $ date +%s
00:46:13.748 02:16:13 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713924973
00:46:13.748 02:16:13 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713924973
00:46:13.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713924973_collect-cpu-load.pm.log
00:46:13.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713924973_collect-vmstat.pm.log
00:46:14.683 02:16:14 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:46:14.683 02:16:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:46:14.683 02:16:14 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:46:14.683 02:16:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:46:14.683 02:16:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:46:14.683 02:16:14 -- spdk/autopackage.sh@19 -- $ timing_finish
00:46:14.683 02:16:14 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:14.683 02:16:14 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:46:14.683 02:16:14 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:46:14.683 02:16:14 -- spdk/autopackage.sh@20 -- $ exit 0
00:46:14.683 02:16:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:46:14.683 02:16:14 -- pm/common@30 -- $ signal_monitor_resources TERM
00:46:14.683 02:16:14 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:46:14.683 02:16:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:14.683 02:16:14 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:46:14.683 02:16:14 -- pm/common@45 -- $ pid=153769
00:46:14.683 02:16:14 -- pm/common@52 -- $ sudo kill -TERM 153769
00:46:14.683 02:16:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:14.684 02:16:14 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:46:14.942 02:16:14 -- pm/common@45 -- $ pid=153770
00:46:14.942 02:16:14 -- pm/common@52 -- $ sudo kill -TERM 153770
00:46:14.942 + [[ -n 2100 ]]
00:46:14.951 + sudo kill 2100
00:46:14.951 [Pipeline] }
00:46:14.970 [Pipeline] // timeout
00:46:14.976 [Pipeline] }
00:46:15.003 [Pipeline] // stage
00:46:15.009 [Pipeline] }
00:46:15.070 [Pipeline] // catchError
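The pm/common helpers above start collect-cpu-load and collect-vmstat in the background before autopackage runs, and the stop_monitor_resources EXIT trap later looks up each monitor's PID and sends it SIGTERM. A simplified sketch of that start/stop lifecycle follows, with the pidfile handling condensed; in this run the collectors manage their own pidfiles under the power output directory.

#!/usr/bin/env bash
# Simplified monitor start/stop pattern modeled on the pm/common calls above.
set -euo pipefail

power_dir=/home/vagrant/spdk_repo/output/power   # assumed output location
mkdir -p "$power_dir"

start_monitor() {
    # Launch a collector in the background and remember its PID for later cleanup.
    local name=$1; shift
    "$@" -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)" &
    echo $! > "$power_dir/$name.pid"
}

stop_monitors() {
    # Send SIGTERM to every recorded monitor, ignoring ones that already exited.
    local pidfile
    for pidfile in "$power_dir"/*.pid; do
        [[ -e $pidfile ]] || continue
        sudo kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        rm -f "$pidfile"
    done
}
trap stop_monitors EXIT

start_monitor collect-cpu-load sudo -E ./scripts/perf/pm/collect-cpu-load
start_monitor collect-vmstat   sudo -E ./scripts/perf/pm/collect-vmstat
sleep 1   # give the collectors a moment to start, as pm/common does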
00:46:15.078 [Pipeline] stage
00:46:15.080 [Pipeline] { (Stop VM)
00:46:15.091 [Pipeline] sh
00:46:15.374 + vagrant halt
00:46:18.697 ==> default: Halting domain...
00:46:28.693 [Pipeline] sh
00:46:28.975 + vagrant destroy -f
00:46:32.260 ==> default: Removing domain...
00:46:32.273 [Pipeline] sh
00:46:32.553 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_2/output
00:46:32.562 [Pipeline] }
00:46:32.577 [Pipeline] // stage
00:46:32.583 [Pipeline] }
00:46:32.598 [Pipeline] // dir
00:46:32.603 [Pipeline] }
00:46:32.618 [Pipeline] // wrap
00:46:32.625 [Pipeline] }
00:46:32.643 [Pipeline] // catchError
00:46:32.653 [Pipeline] stage
00:46:32.655 [Pipeline] { (Epilogue)
00:46:32.671 [Pipeline] sh
00:46:32.952 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:46:54.999 [Pipeline] catchError
00:46:55.001 [Pipeline] {
00:46:55.017 [Pipeline] sh
00:46:55.365 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:46:55.624 Artifacts sizes are good
00:46:55.632 [Pipeline] }
00:46:55.649 [Pipeline] // catchError
00:46:55.661 [Pipeline] archiveArtifacts
00:46:55.669 Archiving artifacts
00:46:56.060 [Pipeline] cleanWs
00:46:56.071 [WS-CLEANUP] Deleting project workspace...
00:46:56.071 [WS-CLEANUP] Deferred wipeout is used...
00:46:56.077 [WS-CLEANUP] done
00:46:56.079 [Pipeline] }
00:46:56.099 [Pipeline] // stage
00:46:56.108 [Pipeline] }
00:46:56.128 [Pipeline] // node
00:46:56.134 [Pipeline] End of Pipeline
00:46:56.180 Finished: SUCCESS